
Cybersecurity as a US Business Enabler: Debunking Cybersecurity Bubbles and Closing Critical Gaps with John Alford

  • Writer: Juan Allan
  • Oct 31
  • 15 min read

John Alford on evolving cybersecurity: AI-driven operations, regulatory shifts, and closing the talent gap. Insights from a CISO & AI Governance Advisor



The most effective cybersecurity programs are no longer just about defense but have evolved into intelligence-driven operations. True resilience is achieved by integrating Deterministic, Generative, and Agentic AI within a governance framework that aligns real-time data with regulatory standards like ISO 27001:2022 and DORA.


But what does it actually take to build a program that can process a billion security events a day and not just survive, but anticipate the next threat? Our guest today, John Alford, has done exactly that. He is the CISO at TeraType and an AI Governance Advisor with deep expertise in standards like ISO 27001, SOC 2, and HIPAA.


He brings a critical perspective on how the US cybersecurity market has shifted from compliance-driven checklists to automated, intelligence-led operations and reveals where the biggest bubbles and gaps in funding are today.


Interview with John Alford


How has the cybersecurity market in the United States evolved over the last three to five years, and which subsectors are showing the fastest growth?


The US cybersecurity market has evolved from compliance-driven defense to intelligence-led operations centered on automation and resilience. Over the last several years I managed large-scale environments processing more than one billion daily security events across roughly three thousand endpoints. That operational scale made it impossible to rely solely on manual triage, so I moved first toward deterministic automation using rule-based detection and static signatures.


As the environment matured, I integrated generative and agentic artificial intelligence to interpret complex event patterns, correlate behavioral anomalies, and predict emerging threats. This transformation shifted the function of the Security Operations Center from reactive alert handling to continuous, data-driven threat anticipation aligned with enterprise risk tolerance and board expectations.


Fast-growing generative and agentic AI now form the core of detection, triage, analysis, and response. Generative AI models read event streams, summarize attack narratives, and suggest initial containment actions derived from structured historical response data. Agentic AI agents take this further by autonomously validating indicators, testing containment hypotheses, and escalating verified threats into playbooks for analyst review. I use Retrieval Augmented Generation (RAG) to merge real-time telemetry with contextual knowledge from previous incidents, policy documents, and vulnerability databases, allowing analysts to see cause, impact, and resolution in a single view. Together these systems accelerate mean-time-to-detect and mean-time-to-respond by more than seventy percent, while maintaining full traceability for audit and compliance evidence.
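
For illustration, a minimal sketch of the retrieval step behind that kind of alert enrichment: rank a small knowledge base of prior incidents and policy text against a live alert, then assemble the context a generative model would summarize. The knowledge base entries, scoring method, and function names here are illustrative, not the production pipeline described above.

```python
# Minimal RAG-style alert enrichment: retrieve the most relevant prior
# incidents and policy snippets for a live alert, then assemble the context
# a generative model would summarize. All names and data are hypothetical.
from collections import Counter
import math

knowledge_base = [
    {"id": "INC-0142", "text": "Credential stuffing against VPN gateway, contained by forcing MFA re-enrollment."},
    {"id": "POL-AC-07", "text": "Service accounts must rotate tokens every 24 hours and never hold interactive logon rights."},
    {"id": "CVE-2021-44228", "text": "Log4Shell remote code execution via JNDI lookups in log4j 2.x."},
]

def _vector(text: str) -> Counter:
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def enrich_alert(alert_text: str, top_k: int = 2) -> str:
    """Return the alert plus its top-k retrieved context, ready for an LLM prompt."""
    alert_vec = _vector(alert_text)
    ranked = sorted(knowledge_base, key=lambda d: _cosine(alert_vec, _vector(d["text"])), reverse=True)
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in ranked[:top_k])
    return f"ALERT:\n{alert_text}\n\nRETRIEVED CONTEXT:\n{context}"

print(enrich_alert("Suspicious JNDI lookup observed in log4j application logs on host web-03"))
```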


This evolution parallels the new control expectations in ISO/IEC 27001:2022 and the Digital Operational Resilience Act (DORA). The 2022 update added controls for threat intelligence, monitoring, and data masking that require measurable automation and analytics to demonstrate continuous assurance. AI-generated incident summaries feed directly into risk registers and evidence repositories to meet these documentation standards. DORA adds further pressure by requiring operational resilience metrics and third-party dependency mapping at the board level, prompting tighter integration between governance, cloud management, and security operations. The fastest growth continues in cloud security, identity management, and managed detection, but the real differentiator is now the use of deterministic, generative, and agentic AI within a governance framework that aligns real-time telemetry, ISO 27001:2022 compliance, and DORA resilience into one cohesive risk narrative.


Which customer segments are driving cybersecurity growth in the United States?


Enterprises remain the dominant force behind cybersecurity growth in the United States. Large financial institutions, healthcare systems, and global technology providers continue to expand programs to align with ISO 27001:2022, ISO 42001 for AI management, PCI DSS, HITRUST, SOC 2, and the California Consumer Privacy Act (CCPA). I have managed environments processing over a billion events each day where deterministic, generative, and agentic artificial intelligence work together to automate detection, triage, and response.


These organizations are also embedding AI trust and safety frameworks into their governance structures to ensure ethical use of automation, transparency of decision logic and protection of sensitive data. Cybersecurity has become part of enterprise value, and boards now view it as both a revenue enabler and a measure of customer confidence.


Government and defense sectors are accelerating investment as they modernize systems and strengthen supply chain security. The Cybersecurity Maturity Model Certification (CMMC) is driving this momentum by requiring defense contractors and service providers to prove maturity across access control, incident response, auditing, and resilience. I have supported public sector clients in aligning with CMMC and National Institute of Standards and Technology frameworks such as NIST SP 800-171 and SP 800-53, using retrieval augmented generation to automate control mapping and audit readiness. These programs are coupling artificial intelligence with deterministic validation to detect anomalies and classify incidents faster than traditional review cycles allow. As federal agencies expand zero trust and continuous monitoring, cybersecurity spending has become embedded in long-term infrastructure and operational budgets.
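
As a hedged sketch of what automated control mapping can look like, the snippet below scores a piece of audit evidence against paraphrased NIST SP 800-171 control text by keyword overlap; a real system would use embeddings and retrieval, and an analyst would confirm every mapping. The control excerpts, threshold, and function names are assumptions made for the example.

```python
# Hypothetical sketch of automated control mapping: score a piece of audit
# evidence against a tiny excerpt of NIST SP 800-171 control text and keep
# the best matches for reviewer confirmation. Control wording is paraphrased.
CONTROLS = {
    "3.1.1": "Limit system access to authorized users, processes, and devices.",
    "3.3.1": "Create and retain system audit logs and records.",
    "3.13.11": "Employ FIPS-validated cryptography to protect confidentiality of CUI.",
}

def map_evidence(evidence: str, min_overlap: int = 2) -> list[tuple[str, int]]:
    """Return (control_id, overlap_score) pairs ordered by keyword overlap."""
    evidence_terms = set(evidence.lower().split())
    scored = []
    for control_id, text in CONTROLS.items():
        overlap = len(evidence_terms & set(text.lower().split()))
        if overlap >= min_overlap:
            scored.append((control_id, overlap))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

evidence = "CloudTrail audit logs are retained for 365 days and records are immutable"
print(map_evidence(evidence))  # an analyst still confirms the final mapping
```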


The private sector beyond the enterprise level is contributing, but at a slower and more selective pace. Many mid-market firms are adopting managed detection and compliance services rather than building in-house programs. They are choosing automation and retrieval augmented generation platforms to meet client or regulatory expectations without full internal teams. This segment is growing steadily, yet the market’s real acceleration remains centered on large enterprises and government organizations that are using artificial intelligence, control frameworks, and governance modernization to integrate cybersecurity into the fabric of business continuity and national resilience.


What are the most pressing technical and operational challenges U.S. organizations face today when trying to secure hybrid and multi-cloud environments?


The October 2025 AWS and Microsoft outages proved how fragile large-scale cloud systems can be. A DNS propagation fault and a control plane synchronization failure cascaded across global regions and interrupted authentication, orchestration, and data services. Containers could not reach metadata endpoints and identity tokens expired before renewal, while telemetry pipelines stopped sending data.


Security teams lost situational awareness because their monitoring tools depended on the same broken infrastructure they were trying to observe. These events forced executives to realize that resilience planning, vendor diversification, and contractual visibility are now core elements of enterprise risk management and should be tracked alongside financial and regulatory exposure.


Visibility remains the hardest technical problem in hybrid and multi-cloud environments. Each cloud provider produces telemetry with different schemas, timestamps, and retention settings that make correlation unreliable. Network flow records rarely align with endpoint data, and cross-platform analysis often requires heavy normalization. Generative and agentic AI models are starting to close these gaps by parsing raw telemetry, inferring missing context, and grouping related anomalies to expose lateral movement. For leadership, this unified view translates into clearer risk quantification and stronger compliance with ISO 27001, ISO 42001, PCI DSS, and similar frameworks, which supports investor and regulator confidence in the organization’s ability to manage exposure.


Identity control continues to create major operational friction. Multi-cloud systems often use separate directories and inconsistent authentication models that leave gaps between environments. Zero trust helps when applied with continuous verification and behavioral analytics, but integration across platforms remains difficult. Deterministic policies define access limits, while agentic AI observes session behavior, scores risk, and ends access when patterns deviate from normal.
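
A simplified sketch of that split between deterministic policy and behavioral scoring is below; the features, weights, and thresholds are made up, and a toy rule-based score stands in for the behavioral model, so this shows only the shape of the decision, not any specific product's logic.

```python
# Illustrative only: a deterministic policy gate combined with a simple
# behavioral risk score that revokes a session when it drifts from baseline.
# Roles, thresholds, and features are hypothetical.
from dataclasses import dataclass

ALLOWED_ROLES = {"analyst", "engineer"}       # deterministic access limit
RISK_THRESHOLD = 0.7                          # revoke at or above this score

@dataclass
class Session:
    user: str
    role: str
    downloads_per_hour: int
    new_geolocation: bool
    off_hours: bool

def risk_score(s: Session) -> float:
    """Toy behavioral score: weight anomalous signals into a 0-1 value."""
    score = 0.0
    score += 0.4 if s.downloads_per_hour > 50 else 0.0
    score += 0.3 if s.new_geolocation else 0.0
    score += 0.3 if s.off_hours else 0.0
    return score

def evaluate(s: Session) -> str:
    if s.role not in ALLOWED_ROLES:           # policy decision, no ML needed
        return "deny"
    if risk_score(s) >= RISK_THRESHOLD:       # behavioral decision
        return "revoke"
    return "allow"

print(evaluate(Session("jdoe", "analyst", downloads_per_hour=80, new_geolocation=True, off_hours=False)))
```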


Keeping up with the pace of change is another constant challenge. Cloud services roll out new features and configuration updates daily, while internal teams deploy hundreds of containers in the same time frame. Manual reviews cannot keep up, so retrieval augmented generation systems are being used to connect live telemetry to control frameworks and generate compliance evidence automatically. These tools verify encryption settings, storage policies, and patch levels continuously and detect drift before it becomes an incident. For executives, this automation transforms compliance from a periodic exercise into a living control system that demonstrates resilience maturity and operational readiness.
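
As a rough illustration of that continuous drift detection, the snippet below compares a live configuration snapshot to an approved baseline and reports every deviation as potential compliance evidence; the keys, values, and baseline are hypothetical.

```python
# A minimal drift check, not a product: compare a live configuration
# snapshot to its approved baseline and report each deviation.
BASELINE = {
    "s3.default_encryption": "aws:kms",
    "s3.public_access_block": True,
    "os.patch_level": "2025-10",
}

def detect_drift(live: dict, baseline: dict = BASELINE) -> list[str]:
    """Return human-readable drift findings for missing or changed settings."""
    findings = []
    for key, expected in baseline.items():
        actual = live.get(key, "<missing>")
        if actual != expected:
            findings.append(f"{key}: expected {expected!r}, found {actual!r}")
    return findings

live_snapshot = {
    "s3.default_encryption": "none",
    "s3.public_access_block": True,
    # patch level missing entirely
}
for finding in detect_drift(live_snapshot):
    print(finding)
```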


Are there common architectural or process mistakes you see repeatedly?


Yes, and they tend to repeat across every industry, no matter how advanced the organization thinks it is :) The first mistake is treating cloud architecture as if it were still a data center. Teams lift and shift workloads without rethinking network segmentation, privilege design, or telemetry collection. They keep the same flat networks, long-lived credentials, and static firewall rules that attackers exploit immediately. The result is an illusion of control that hides fragmented logging and inconsistent identity boundaries. At the board level this looks like technical debt disguised as cloud adoption, and it usually surfaces when an audit or incident reveals how incomplete the migration really was.


Another recurring problem is incomplete visibility and poor control inheritance. Many companies depend on the provider’s default settings and assume those controls meet their compliance needs. Default configurations often leave logging disabled or incomplete, which makes forensic reconstruction almost impossible when incidents occur. I frequently see environments with overlapping agent deployments that collect redundant data while missing key sources like API gateway or DNS telemetry. This wastes budget, inflates noise, and gives a false sense of coverage. Executives who think they are funding a mature monitoring capability discover that what they have is fragmented observability, which fails both regulators and insurers when evidence is requested.


Identity sprawl is also a universal issue. Every new platform adds another set of credentials, tokens, and roles that expand faster than governance can keep up. Teams create exceptions for convenience and forget to revoke them, leading to excessive privilege and orphaned accounts. Even with single sign-on and multi-factor authentication, these gaps remain the root cause of many breaches. The fix requires automation, just-in-time provisioning, and periodic validation tied directly to change management. Boards are starting to measure identity metrics the same way they track financial controls because every access decision now carries measurable risk and regulatory accountability.
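
A minimal sketch of just-in-time provisioning tied to change management: every grant references a change ticket and carries an expiry, and a scheduled sweep revokes anything past its window. The in-memory grant store, ticket format, and role names are stand-ins for a real IAM or PAM platform, not an actual API.

```python
# Sketch of just-in-time access: each grant carries a ticket reference and
# an expiry, and a periodic sweep revokes anything past its window.
from datetime import datetime, timedelta, timezone

grants = []  # in a real system this would live in the IAM / PAM platform

def grant_access(user: str, role: str, change_ticket: str, hours: int = 4) -> dict:
    grant = {
        "user": user,
        "role": role,
        "ticket": change_ticket,                     # ties the grant to change management
        "expires": datetime.now(timezone.utc) + timedelta(hours=hours),
    }
    grants.append(grant)
    return grant

def revoke_expired() -> list[dict]:
    """Remove and return every grant whose time window has elapsed."""
    now = datetime.now(timezone.utc)
    expired = [g for g in grants if g["expires"] <= now]
    grants[:] = [g for g in grants if g["expires"] > now]
    return expired

grant_access("jdoe", "prod-db-admin", change_ticket="CHG-10432", hours=2)
print(revoke_expired())  # run on a schedule; empty until the window lapses
```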


Another mistake is underestimating the value of configuration and change discipline. Cloud environments evolve daily and minor drift accumulates into significant exposure over time. Many organizations lack version control for policies, security groups, or encryption settings, which makes rollback or audit reconstruction difficult. Retrieval augmented generation tools can now track these changes continuously and document compliance alignment automatically, but adoption remains slow. The most successful organizations treat configuration management as a form of internal audit, translating technical precision into visible proof of governance for customers, regulators and shareholders alike.


How is funding shaping innovation in cybersecurity, and are investors prioritizing specific technologies or business models?


Funding in cybersecurity has become more strategic than speculative. Venture capital is moving away from pure product plays toward platforms that integrate detection, response and compliance automation. Investors want business models that scale through recurring revenue, cloud delivery and integrations with AI. Corporate venture arms are focusing on technologies that align with their own resilience goals such as identity orchestration, cloud posture management and automated threat intelligence. Boards and investors alike now value measurable risk reduction and regulatory alignment over flashy features or marketing buzz.


AI driven security is attracting the most attention. Venture groups are backing companies that apply deterministic, generative and agentic AI to automate triage, analysis, and response at scale. Retrieval augmented generation is being applied to compliance reporting, risk scoring, and SOC automation. These capabilities lower cost per alert and address the staffing shortage that every investor understands. The funding narrative has shifted from “next-gen analytics” to “validated automation that proves its ROI.”


Government funding and public private programs are also shaping the field. Federal and state grants are flowing toward AI assurance, supply chain defense and workforce development under initiatives tied to the National Cybersecurity Strategy. Defense contractors and research universities are forming joint ventures to meet Cybersecurity Maturity Model Certification and zero trust mandates. Mergers and acquisitions are consolidating niche tools into end-to-end platforms that promise operational resilience for critical infrastructure and finance. Investors are rewarding companies that deliver measurable compliance, continuous assurance, and cross-sector scalability rather than one-off security point solutions.


Are funding patterns creating any bubbles or gaps in capability?


Absolutely! The patterns are becoming more visible as capital concentrates around a few high-visibility trends. Venture money is chasing anything labeled AI security, even when the product is just an analytics wrapper on top of legacy detection logic. Many startups are overvalued on promise rather than demonstrable efficacy, which inflates valuations without solving real operational gaps. Meanwhile, critical but less glamorous areas like configuration management, patch automation, and secure software supply chain tooling remain underfunded. Boards will eventually see that resilience depends on these unglamorous capabilities, not just glitzy headlines.


Another bubble is forming around compliance automation and “instant audit readiness.” Investors love the narrative of automated governance, yet few products truly understand frameworks such as ISO 27001, SOC 2, or PCI DSS beyond surface mapping. Most tools generate polished dashboards but still require human verification, which limits scalability and introduces false assurance. The result is a compliance theater that looks mature but falters under real scrutiny. This misalignment leaves organizations exposed in the very area they thought they had automated.


At the same time, serious capability gaps persist in identity hygiene, operational technology security, and workforce development. AI models can summarize alerts but cannot yet replace disciplined engineering or skilled analysts. Federal and state grants are trying to fill these shortages, especially around CMMC and critical infrastructure resilience, but private funding remains thin in these spaces. The next correction will likely reward builders who focus on interoperability, human-AI collaboration, and measurable control maturity. Investors who recognize that sustainable security comes from depth, not hype, will define the next phase of the market.


How effective are current U.S. regulatory and compliance frameworks at improving baseline security, and where do they fall short?


U.S. regulatory and compliance frameworks have raised the security baseline but struggle to keep pace with technology and threat velocity. NIST guidance remains the backbone of most programs, offering clear control families and maturity paths that drive structure into risk management. Sector-specific frameworks like HIPAA and PCI DSS have reduced low-hanging exposure by enforcing encryption, access control, and breach notification requirements. State breach laws, particularly in California and New York, increased public accountability by forcing rapid disclosure and tightening data handling. These efforts have made security measurable and reportable, which boards and insurers now view as essential.


The problem is that most frameworks are descriptive rather than adaptive. They prescribe control outcomes but not continuous assurance, which leaves gaps in hybrid and multi-cloud environments where configurations change hourly. Auditors measure documentation, not telemetry, so compliance can lag months behind real conditions. I often see firms pass certification while still running outdated services or incomplete identity validation because the frameworks cannot verify in real time. As a result, compliance proves governance maturity but not actual resilience, which limits its value in preventing breaches or downtime.


Another limitation is fragmentation across jurisdictions and sectors. Financial, healthcare, and telecommunications entities follow different rules enforced by separate agencies like the FCC, HHS, and OCC, each with distinct definitions of incident and exposure. This patchwork forces companies operating across states or sectors to maintain overlapping evidence sets and redundant audits. The burden diverts resources from engineering to paperwork. Boards increasingly want a harmonized regulatory baseline that aligns with international standards such as ISO 27001 and the EU’s Digital Operational Resilience Act so that compliance work produces both legal coverage and real technical improvement. Until frameworks evolve toward continuous monitoring, shared terminology, and cross-sector alignment, they will raise awareness but stop short of delivering the operational resilience they aim to ensure.


What regulatory changes would have the largest positive impact with the least friction?


A unified national breach reporting standard would yield the greatest benefit with minimal disruption. A single federal rule defining what constitutes a breach, when notification is required and how reporting must occur would replace the current state-by-state framework that creates cost, delay and confusion. A secure centralized reporting portal could support structured data submissions and real-time aggregation of incident intelligence. Safe harbor provisions for organizations using encryption, validated detection, and tested response plans would encourage earlier disclosure and more honest communication. For boards, this change would replace uncertainty with clarity and make breach reporting a manageable governance function rather than a reputational crisis.


Moving from periodic compliance audits to continuous assurance would also have significant impact. Permitting telemetry and configuration data to serve as validated control evidence would allow regulators to assess compliance in real time rather than through static snapshots. Aligning evidence formats with frameworks such as ISO 27001 and NIST would enable reuse of validated control data across multiple certifications and industries. Immutable log data and automated evidence mapping would reduce both audit fatigue and human error. For executive teams, this modernization would convert compliance from a cost of doing business into a continuous performance metric tied to operational resilience and insurance qualification.


Improving software supply chain and third-party oversight would close some of the most persistent security gaps. Requiring a Software Bill of Materials (SBOM), digitally signed build artifacts, and time-bound vulnerability disclosure would give regulators and customers visibility into the full dependency chain. Broader adoption of STIX (Structured Threat Information Expression) and TAXII (Trusted Automated eXchange of Indicator Information) would standardize threat and vulnerability data exchange across private and public sectors. STIX defines a consistent data model for describing cyber threats, indicators, and relationships, while TAXII provides a secure transport mechanism for sharing that structured intelligence between systems. Embedding these standards into regulatory guidance would accelerate detection and coordinated response without adding compliance overhead.
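
To make that data model concrete, here is a hand-built STIX 2.1 indicator for a single known-bad file hash, serialized as JSON. In practice a library such as python-stix2 and a TAXII server would produce and transport this; the UUID, hash, and indicator name below are placeholders.

```python
# Hand-rolled example of the structured threat data STIX describes:
# one indicator for a known-bad file hash, serialized as STIX 2.1 JSON.
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Malicious installer observed in vendor update channel",
    "pattern_type": "stix",
    "pattern": "[file:hashes.'SHA-256' = '0000000000000000000000000000000000000000000000000000000000000000']",
    "valid_from": now,
}
print(json.dumps(indicator, indent=2))  # ready to publish to a TAXII collection
```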


What workforce and talent challenges is the U.S. cyber industry facing, and which strategies have proven most effective for building and scaling skilled teams?


The often repeated claim of a cybersecurity talent shortage is only partly accurate. There are qualified professionals available, but many organizations undervalue them, expect unrealistic qualifications, and fail to invest in proper development. Job listings frequently demand advanced certifications, cloud expertise, and years of experience for salaries that barely cover entry-level expectations. Instead of training and mentoring, many companies default to buying more tools as a substitute for building capability. This overreliance on technology hides deeper management problems and leaves leadership unprepared to sustain an effective, motivated team. Boards that treat workforce development as a control function rather than a cost center typically discover that the talent problem is far smaller than it appeared.


Retention and diversity remain major weaknesses across the sector. Security operations centers lose staff rapidly due to excessive workload, limited career growth and constant alert fatigue caused by overlapping monitoring tools. Diversity programs often stop at recruiting rather than ensuring representation in leadership and technical architecture roles. Teams that consolidate platforms, remove redundant alert sources and invest in targeted automation improve both morale and performance. Those that pair this with structured coaching and cross training across compliance, risk, and engineering create a culture that retains knowledge and improves operational maturity.


The most effective workforce models scale through structure, education and selective automation rather than headcount. Organizations that build internal academies, define clear career paths and provide continuous training create deeper technical capability than competitors that rely on external hiring. Upskilling internal IT and cloud engineers into security roles is faster and more sustainable than competing for scarce external candidates. Artificial intelligence and analytics tools can remove repetitive manual tasks but still require human oversight and contextual understanding. The most resilient teams balance technology with experienced judgment, supported by leadership that measures success in risk reduction and knowledge retention rather than the number of platforms deployed.


How important are apprenticeship and reskilling programs compared to hiring experienced practitioners?


Apprenticeship and reskilling programs are the closest thing cybersecurity has to a long-term cure for its staffing problems. The endless hunt for “experienced practitioners” has turned into an expensive habit that rarely delivers real capability.


Hiring veterans fills gaps quickly but often just reshuffles talent across the same small circle of employers. Apprenticeships create loyalty, depth and institutional memory which are three things money cannot buy in a bidding war. Companies that build their own talent pipelines eventually stop complaining about shortages because they grow the skills they need instead of renting them.


Reskilling existing IT or DevOps staff often produces better outcomes than chasing résumés across LinkedIn. These employees already know the systems, the people, and the business context, which makes them far more effective once trained. A focused six month program can produce analysts who outperform outsiders with twice the experience. The key is leadership willing to invest in mentorship instead of another monitoring tool as you can’t automate judgment, but you can teach it.


Experienced hires still have value, but they should be multipliers, not replacements. The smartest teams mix seasoned professionals with apprentices who bring curiosity and stamina. Senior engineers set standards while newcomers push innovation and energy. Companies that rely only on veterans eventually stall, while those that mix both keep evolving.

How resilient is the U.S. critical infrastructure and supply chain ecosystem to cyberattacks, and what practical steps should organizations and policymakers prioritize to reduce systemic risk?


Resilience has improved but the system is still fragile. The Colonial Pipeline attack in May 2021 proved that one compromised laptop can disrupt a national fuel network. Log4Shell in December 2021 and the Microsoft Exchange exploits in early 2022 showed how fast a single flaw can spread through every industry that uses shared software.


2024 brought the Change Healthcare outage and the XZ Utils backdoor, both reminders that the real threat often hides inside trusted supply chains. When AWS and Microsoft both suffered DNS outages in October 2025, it drove the point home that even the biggest players can go dark without warning. The lesson is that our infrastructure is more connected, more interdependent and more brittle than we like to admit.


The software supply chain is still the soft underbelly of the entire system. Every company relies on open source components, but very few can tell you exactly which ones or who maintains them. The introduction of software bills of materials is progress but too many exist as static spreadsheets that no one updates or verifies. True resilience means automating those inventories, integrating them into build systems and verifying that code is signed, current and supported. Without that kind of transparency, we are still trusting what we cannot see and attackers continue to exploit what we have not checked. The uncomfortable truth is that no one owns the problem so everyone ends up sharing the risk.


Even digital trust itself is entering a risky transition. The planned reduction of certificate lifespans from 398 days to 90 days in 2026 will tighten security but create new operational headaches. Without automation, some organizations will take themselves offline simply because certificates expire faster than teams can renew them. That may sound minor until it happens to a hospital or a payment processor. Automating issuance, renewal, and revocation needs to be treated as a core security control not an IT maintenance task. The ability to manage digital trust quickly and accurately will soon matter as much as patching or access control.
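
A small sketch of the monitoring side of that automation: read a host's certificate expiry over TLS and flag anything inside the renewal window. The host, threshold, and the note about triggering ACME renewal are examples, not a prescribed toolchain.

```python
# Illustrative expiry check for the 90-day world: connect to a host, read the
# certificate's notAfter date, and flag anything inside the renewal window.
import socket
import ssl
import time

RENEW_WITHIN_DAYS = 30

def days_until_expiry(host: str, port: int = 443) -> float:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires - time.time()) / 86400

remaining = days_until_expiry("example.com")
if remaining < RENEW_WITHIN_DAYS:
    print(f"Renew now: {remaining:.0f} days left")   # trigger ACME renewal here
else:
    print(f"OK: {remaining:.0f} days left")
```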


Reducing systemic risk requires better visibility, faster action, and more honesty about shared weaknesses. Companies need to track their assets and suppliers in real time, not through spreadsheets built for last quarter’s audit. Policymakers should promote continuous monitoring, standardized incident reporting, and coordinated red teaming of essential services. After all the advances in cloud, AI, automation, audits, standards, laws, fines, and suits, passwords and DNS weaknesses still cause the biggest failures, just as they did 35 years ago.


