ChatGPT 5 Exposed: What OpenAI Isn't Telling You About Its New AI
- Juan Allan
- Aug 11
- 4 min read
ChatGPT 5 revolutionizes AI with PhD-level capabilities but has critical security vulnerabilities and worrying accuracy issues

The arrival of ChatGPT 5 marks a turning point in the evolution of artificial intelligence, promising unprecedented capabilities while raising significant questions about its social and economic impact. This model presents notable advances in reasoning, accuracy, and specialized applications, but it also exposes troubling vulnerabilities that require careful analysis.
What’s Behind the New ChatGPT
OpenAI positions GPT-5 as a qualitative leap comparable to “conversations with a PhD-level expert,” as CEO Sam Altman described during the launch. The company implemented a unified system that eliminates the need to manually select between different models, allowing the system to automatically decide when to apply deep reasoning or quick responses.
“GPT-5 is the best model we've created for healthcare, outperforming all previous models on HealthBench, an evaluation we created with 250 doctors on real-world tasks,” Altman said, according to reports from HLTH. This specialized healthcare capability represents one of the model's most promising benefits, with documented cases where patients like Carolina used the system to interpret complex biopsy reports after being diagnosed with three types of cancer in one week.
The functionality called “vibe coding” allows for the creation of complete applications through simple prompts, potentially transforming software development. Microsoft has integrated GPT-5 into its enterprise products, while companies such as Amgen report significant improvements in scientific accuracy and analysis speed.
The Risks Beneath the Code
Yet independent security teams identified alarming vulnerabilities within 24 hours of the launch. NeuralTrust demonstrated “jailbreaking” techniques using methods such as Echo Chamber and targeted narratives, causing GPT-5 to generate instructions for creating Molotov cocktails without triggering security filters.
“GPT-5, with all its new ‘reasoning’ updates, fell to basic adversarial logic tricks,” warned SPLX researchers, according to reports by Cybernews. Their tests found that, without additional protections, the model succumbed to attacks 89% of the time, scoring only 11 out of 100 in security resilience.
Particularly concerning is the StringJoin Obfuscation technique, where inserting hyphens between characters and masking them as an “encryption challenge” completely bypasses the model's safeguards. “The unprotected GPT-5 model is virtually unusable for businesses as configured,” concluded SPLX researchers.
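To see why this class of trick is effective, consider a minimal, purely illustrative sketch (not code from the SPLX report) of how character-separator obfuscation defeats a naive keyword filter. The function names and the blocklist here are hypothetical; the point is only that inserting a separator between characters hides a verbatim keyword match:

```python
def obfuscate(text: str, sep: str = "-") -> str:
    """Insert a separator between every character, as in the
    hyphen-insertion trick described above."""
    return sep.join(text)

def naive_filter(text: str, blocked: list[str]) -> bool:
    """Return True if any blocked keyword appears verbatim."""
    lowered = text.lower()
    return any(word in lowered for word in blocked)

blocked_terms = ["forbidden"]          # hypothetical blocklist
plain = "this request is forbidden"
masked = obfuscate(plain)              # "t-h-i-s- -r-e-q-..."

print(naive_filter(plain, blocked_terms))   # True: keyword found
print(naive_filter(masked, blocked_terms))  # False: filter bypassed
```

A filter that matches only surface strings never sees the keyword once it is split apart, which is why defenses need to operate on the model's understanding of the request rather than on the raw text.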
The Hallucination Problem and The Economic Impact
Although OpenAI claims significant reductions in hallucinations, GPT-5 still has problematic rates. With web access, the model has a hallucination rate of 9.6%, compared to 12.9% in GPT-4o. However, without internet access, these figures skyrocket to 47% for GPT-5 main and 40% for GPT-5 thinking.
“You spend a lot of time trying to figure out which answers are factual and which are not,” explained Pratik Verma, CEO of Okahu, according to The New York Times. This persistence of incorrect information poses serious risks for applications in critical fields such as medicine, law, and finance.
On the other hand, the economic impact of GPT-5 generates conflicting perspectives. Sam Altman expressed particular optimism toward Generation Z, stating that “if I were graduating from college right now, I would feel like the luckiest kid in history,” according to Fortune. However, he acknowledges that “some kinds of jobs will disappear completely.”
OpenAI reports that 28% of employed adults in the United States who have used ChatGPT now use it at work, compared to only 8% in 2023. “ChatGPT has saved teachers nearly six hours per week on tasks; it saved state workers in Pennsylvania an average of 95 minutes per day on routine tasks,” according to the company's economic analysis.
Goldman Sachs predicts that half of the entry-level white-collar workforce could be replaced by AI in five years. However, Altman suggests that “completely new, exciting, super well-paid, and super interesting jobs” will emerge, although he admits that predicting the future beyond 10 years is “very difficult to imagine.”
Ethical and Regulatory Considerations
Ethical concerns span multiple dimensions. In the realm of privacy, GPT-5's ability to integrate with Gmail and Google Calendar raises concerns about the handling of sensitive personal data. “The collection, storage, and processing of sensitive patient information raises important privacy issues,” warns an analysis published in PMC on ethical considerations in healthcare.
Transparency represents another critical challenge. “Customers have a right to know if their interaction was mediated by AI,” emphasizes Talkdesk in its ethical analysis. Algorithmic bias perpetuated in training data can lead to discrimination in critical areas such as hiring, lending, and judicial decisions.
Legal liability remains ambiguous when GPT-5 provides inappropriate medical advice. “Determining legal liability in cases where ChatGPT's advice leads to harm can become a complex issue,” notes PMC's research, highlighting the urgent need for comprehensive regulatory frameworks.
Future Outlook of ChatGPT
GPT-5 simultaneously represents the transformative potential of AI and its inherent risks. Its capabilities in healthcare, software development, and complex analysis offer tangible benefits, but security vulnerabilities and persistent accuracy issues require extreme caution.
To maximize benefits while minimizing risks, organizations must implement additional layers of security, continuous monitoring, and human validation. Education about AI limitations and the development of robust ethical frameworks are essential for responsibly navigating this new technological era.
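The layered-defense approach above can be sketched in a few lines: every model response first passes an automated check, and anything flagged is escalated to a human reviewer before release. This is a minimal illustration under assumed names, not part of any OpenAI API:

```python
from dataclasses import dataclass

@dataclass
class Review:
    text: str
    approved: bool
    reason: str

def automated_check(response: str, flagged_terms: list[str]) -> bool:
    """First layer: reject responses containing flagged terms."""
    lowered = response.lower()
    return not any(term in lowered for term in flagged_terms)

def release(response: str, flagged_terms: list[str], human_review) -> Review:
    """Second layer: anything the automated check rejects goes to a human."""
    if automated_check(response, flagged_terms):
        return Review(response, True, "passed automated check")
    if human_review(response):
        return Review(response, True, "approved by human reviewer")
    return Review(response, False, "blocked")

# Usage: a conservative reviewer callback that rejects escalated output.
result = release("a benign answer", ["molotov"], human_review=lambda r: False)
print(result.approved)  # True
```

The design choice worth noting is that the human is the fallback, not the default: automated checks handle the bulk of traffic cheaply, while ambiguous or flagged cases get the scrutiny the article argues they need.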