
Navigating the AI-Cybersecurity Convergence: The New Frontier in Australia's Market with Jason Ha

  • Writer: Juan Allan
  • Jan 21
  • 6 min read

Jason Ha analyses Australia's cybersecurity market, AI's dual role in risk and defence, and critical industry challenges. Essential insights for leaders navigating digital transformation.



What if the greatest cybersecurity threat facing Australian organisations today isn't a sophisticated hacker or a novel AI-powered attack, but the accelerating complexity of their own digital ecosystem? As companies rush to adopt AI and lean on an ever-expanding network of third-party vendors, traditional risk models are fracturing.


To navigate this convergence of innovation and vulnerability, we spoke with Jason Ha, CISO at Ethan and a seasoned expert in security architecture and strategic risk management. In this interview, he dissects Australia's cybersecurity landscape, revealing why the market is shifting focus, how AI is both a shield and an accelerator of risk, and the critical frameworks businesses need to survive the next five years.


Interview with Jason Ha


How is the cybersecurity market in Australia currently performing, and what factors are driving its growth (e.g. regulation, threat landscape, digital transformation)?


From my direct observation, the cybersecurity market at the start of 2025 was relatively slow, as many organisations began to reprioritise their focus and budgets from cybersecurity to AI-related initiatives. What this gave rise to was a stronger focus on data governance, data protection and identity management.


Cybersecurity projects related to these particular areas did see stronger growth. The more commoditised areas of cybersecurity (penetration testing, security assessments and audits, security operations services), while they continued to be in demand, saw increasing competition from both global and local service providers vying for this business.


What role is artificial intelligence playing in strengthening cybersecurity capabilities, and how is it also increasing cyber risk in Australia?


I will take this question in the context of using AI within cybersecurity, as opposed to the security considerations of AI in general. We are seeing a significant enhancement in cyber-related capabilities through the use of AI, especially in traditionally resource-intensive areas.


For example, Managed Detection and Response and Security Operations Centres have benefited from a number of the "AI" enhancements made available by vendors supporting this space, which speed up and in some instances streamline the analytics and decision making when indicators of compromise are detected.


We have also seen agentic AI solutions augmenting capabilities in a traditional SOC, providing a more efficient and cost-effective way to bolster an organisation's existing SOC team. What's more, these AI agents have direct integration into a number of the supporting controls (e.g. endpoints and firewalls), allowing the agent to make and prepare recommendations specific to those controls and technologies so the most effective action can be taken in the least amount of time.


As with all things AI, the risk is not necessarily in the use of AI itself; it is in the general understanding of what the AI is doing. For organisations that have well-defined processes, appropriately controlled access and correct governance over their information, the "acceleration" of processes achieved through AI can be safely managed.


Organisations that do not have the same level of rigour but hope to use AI to take "shortcuts" are typically more exposed, as they are essentially entrusting the AI to do the right thing with potential liberties within their environment. The risk therefore is not in the usage of AI itself, but in the lack of controls that existed to begin with, which AI simply compounds when it performs less secure processes at high speed.


Which industries in Australia are most exposed to cyber risk today, and which are adopting AI-driven cybersecurity solutions the fastest—and why?


According to the ACSC 2024-2025 Annual Threat Report, Government (Federal and State), Financial Services and then Healthcare are the most targeted industries. While attacks on Financial Services carry a greater perceived financial reward, the Healthcare sector within Australia has generally been a lucrative target, with a number of healthcare organisations compromised by ransomware-related attacks over the past few years.


By observation, quite a number of organisations across all industries are rapidly exploring ways to achieve economic benefit through the introduction of AI, but typically the fastest to realise that benefit are services-related industries (such as legal, consulting, etc.). Whether it is speeding up research and analysis, streamlining report writing or rapidly creating presentation content, services organisations typically find that a modest investment in AI yields a significant reduction in resourcing time and effort, with immediate and tangible returns. This shift is consistent with trends in services industries globally.


Other sectors we are seeing with a strong focus on AI are financial services organisations looking to augment resource-intensive areas such as customer support centres, using AI technology to identify and prevent fraudulent activities, improve customer experience, provide real-time quality assurance and perform detailed sentiment analysis. Such activities would previously have required recording calls and reviewing them (by managers, quality assurance specialists and fraud specialists), but with the introduction of AI, many of these things can be done in real time alongside the customer support officer. Some organisations are going as far as experimenting with agentic customer service agents.


What are the main challenges facing Australia’s cybersecurity and cyber risk market, particularly around AI-driven threats and compliance?


The two predominant challenges in this space are: 1. the maturity of an organisation's general risk management capability, and 2. the availability of regulation and practical frameworks that are understandable and enforceable.


For the first part, managing the risks of AI requires foundational risk management capabilities and the appropriate application of controls to manage the identified risks. Typically the challenge is the organisation's ability to properly define, assess and understand the most effective controls for managing those risks. When it comes to AI-related risks, however, challenges still often arise around risk ownership (is this a new risk, or the same as an existing risk in a different context?), risk mitigation (do we need new controls, or are our existing controls simply not working the way they should?) and, as always, risk appetite (how much risk are we willing to accept without stifling the innovation that AI brings, and does the risk reduction justify the amount we need to spend on controls?).


With some of the newer frameworks, we are seeing organisations refocus on uplifting their Governance, Risk Management and Compliance to cover not just cyber, privacy and data protection but also, with renewed focus, the ethical use of data (including information generated from external sources), and to look for "as code" approaches that automate the validation and evidence collection for the associated controls.
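As a rough illustration of what an "as code" approach can look like, the sketch below declares controls as data and runs automated checks that emit timestamped evidence records. This is a minimal sketch, not any particular product's API; all control names, checks and environment fields are hypothetical.

```python
# Minimal compliance-as-code sketch: controls declared as data, validated
# automatically, with a timestamped evidence record emitted per check.
# Control IDs, checks and environment fields are hypothetical examples.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

@dataclass
class Control:
    control_id: str
    description: str
    check: Callable[[dict], bool]  # returns True if the control passes

def run_controls(controls: list[Control], environment: dict) -> list[dict]:
    """Evaluate each control against the environment and collect evidence."""
    evidence = []
    for c in controls:
        passed = c.check(environment)
        evidence.append({
            "control_id": c.control_id,
            "description": c.description,
            "passed": passed,
            "checked_at": datetime.now(timezone.utc).isoformat(),
        })
    return evidence

# Environment state that would normally be gathered from system APIs.
env = {"mfa_enforced": True, "data_retention_days": 400}

controls = [
    Control("IAM-01", "MFA enforced for all users",
            lambda e: e["mfa_enforced"]),
    Control("DATA-03", "Data retained no longer than 365 days",
            lambda e: e["data_retention_days"] <= 365),
]

results = run_controls(controls, env)
for r in results:
    print(r["control_id"], "PASS" if r["passed"] else "FAIL")
```

In practice the environment state would be pulled live from identity providers, cloud APIs and the like, and the evidence records stored for auditors; the point is that validation runs continuously rather than as a periodic manual exercise.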


How does the shortage of skilled cybersecurity and AI professionals impact Australia’s ability to manage cyber risk effectively?


I think the shortage is not necessarily of technical skill in those domains, but rather of skilled risk professionals who can translate between the business and technical layers. Ultimately, the management of risk is about providing information to key stakeholders (typically risk owners) so they can make the most informed risk-based decisions.


The challenge a lot of organisations typically face is not having all the information needed to make these informed decisions, or having this information presented in a way that leads them down the path of making the wrong decision (over- or under-managing risk).


The focus and messaging I normally give people is to continuously invest in improving not just the maturity of risk management practices (of which cyber and AI risk management are part) but also how that information is communicated, consistently and in a way that makes sense for all business and technical stakeholders. This is why we are seeing continuously growing interest in areas like risk quantification models.
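Risk quantification models of this kind typically boil down to Monte Carlo simulation over event frequency and loss magnitude, in the spirit of FAIR-style analysis. The sketch below is a toy illustration with purely made-up parameters, not any specific methodology; it communicates risk as dollar percentiles rather than a red/amber/green rating, which is the translation step described above.

```python
# Toy Monte Carlo risk quantification: sample incidents per year (Poisson)
# and a loss per incident (lognormal), then report annual-loss percentiles.
# All parameters are illustrative, not calibrated to any real data.
import math
import random

def sample_poisson(rng: random.Random, lam: float) -> int:
    """Knuth's method; fine for small event rates."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_annual_loss(freq: float, loss_mu: float, loss_sigma: float,
                         trials: int = 10_000, seed: int = 1) -> list[float]:
    """Return one simulated total annual loss per trial."""
    rng = random.Random(seed)
    annual = []
    for _ in range(trials):
        events = sample_poisson(rng, freq)
        annual.append(sum(rng.lognormvariate(loss_mu, loss_sigma)
                          for _ in range(events)))
    return annual

# Illustrative inputs: ~0.4 incidents/year, median loss per incident ~$160k.
losses = sorted(simulate_annual_loss(freq=0.4, loss_mu=12.0, loss_sigma=1.0))
p50 = losses[len(losses) // 2]
p95 = losses[int(len(losses) * 0.95)]
print(f"median annual loss: ${p50:,.0f}")
print(f"95th percentile:    ${p95:,.0f}")
```

A risk owner can then compare the 95th-percentile loss directly against the cost of a proposed control, which is exactly the risk-return conversation described earlier.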


What is the outlook for cybersecurity and cyber risk growth in Australia over the next five years, especially as AI adoption accelerates?


Interestingly, one of the challenges we still have to date is third-party risk. As we saw in 2025, quite a number of high-profile breaches actually arose from compromises of third parties that then impacted the primary organisation. When we overlay that with what we are seeing with AI, suddenly there are a lot of new "businesses" being created through the use of AI, as well as existing organisations adding AI-type capabilities (often relying on third parties for that capability).


We are entering a stage where the already extreme challenge of managing third- and fourth-party risk is going to become exponentially more difficult. For organisations that need to assess these types of businesses, the vendors will typically fall into their second risk tier (i.e. the medium or low risk tier), which currently wouldn't receive much attention, if those vendors are assessed at all.


Thankfully, emerging approaches like the adoption of the SMB1001 standard for Small and Medium Businesses, which specifically targets these high-volume but lower-risk tiers, will go a long way towards tackling this problem. As SMB1001 provides a tiered certification model that is globally recognised, organisations can augment their existing third-party risk models to request certification rather than perform traditional questionnaire-based assessments.
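A third-party risk model augmented this way might simply route each vendor to an assessment method by risk tier, accepting a recognised certification in place of a questionnaire for the lower tiers. The sketch below is hypothetical; the tier names and routing rules are illustrative, not SMB1001's actual certification levels.

```python
# Hypothetical routing of vendor assessments by risk tier: high-risk vendors
# always get a deep-dive; lower tiers may substitute a recognised
# certification (e.g. an SMB1001 tier) for a questionnaire.
def assessment_method(risk_tier: str, has_certification: bool) -> str:
    """Decide how a vendor in a given risk tier should be assessed."""
    if risk_tier == "high":
        return "full questionnaire and audit"  # certification alone not enough
    if has_certification:
        return "accept certification"          # e.g. SMB1001 tier evidence
    return "lightweight questionnaire"

# Illustrative vendor register.
vendors = [
    {"name": "PayCo",   "tier": "high",   "certified": True},
    {"name": "MailCo",  "tier": "medium", "certified": True},
    {"name": "PrintCo", "tier": "low",    "certified": False},
]

for v in vendors:
    print(v["name"], "->", assessment_method(v["tier"], v["certified"]))
```

The practical gain is volume: the many medium- and low-tier vendors that today go unassessed can be covered by checking a certificate, reserving questionnaire effort for the high-risk tier.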


New businesses that don't have the maturity or ability to justify attaining ISO 27001 or even SOC 2 can leverage the SMB1001 certification instead.
