
Human vs. Machine? Why That's the Wrong Question in Enterprise Automation with Greggory Elias

  • Writer: Juan Allan
  • Dec 23, 2025
  • 5 min read

Greggory Elias reveals how AI is reshaping jobs, the pitfalls of enterprise adoption, and why the US is racing ahead while holding off on AI regulation



What if the future of work isn't about human vs. machine, but about experts who can instruct AI and interns who can execute with it? This is the compelling vision of Greggory Elias, who argues that the most valuable skill tomorrow will be the ability to deconstruct complex tasks into AI-manageable workflows.


As the founder of AgentsForHire.ai and a consultant to over 25 enterprises, Elias has navigated the frontlines of corporate AI adoption. In this interview, he cuts through the hype to reveal the real challenges, from engineering ego to regulatory philosophy, and outlines the new human roles emerging in the automated age.


Interview with Greggory Elias


How is AI and automation changing jobs around the world?


I think the model for companies in the future will be to hire experts and interns. There is almost no value in paying someone top dollar to understand a task at the 75% level when AI can also perform at that level. This means you need experts who know exactly what a deliverable should look like, what the client is looking for, and how to explain how to do a task.


If you can explain exactly how to do a task the way you would to a five-year-old, then an LLM can understand it. This means breaking the task into specific steps that mirror the limitations of AI. The ability to decompose a task into a workflow the way a consultant or expert would, based on first-principles thinking, is going to be a key skill for the jobs of the future.
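As a rough illustration of that decomposition skill, here is a minimal sketch in Python. The task, the `call_llm` helper, and the three steps are all hypothetical; the point is only that each step is small and explicit enough for an LLM to execute on its own.

```python
# A minimal sketch (not from the interview) of breaking one expert task into
# small, explicit steps an LLM can execute. `call_llm` is a hypothetical
# placeholder for any chat-completion client.

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real LLM client call here.
    return f"<model output for: {prompt[:40]}...>"

def summarize_client_report(raw_notes: str) -> str:
    # Step 1: extract only factual statements, nothing else.
    facts = call_llm(f"List the factual statements in these notes, one per line:\n{raw_notes}")
    # Step 2: organize the facts under the headings the client expects.
    outline = call_llm(f"Group these facts under the headings Budget, Timeline, Risks:\n{facts}")
    # Step 3: draft the deliverable from the outline, one small step at a time.
    return call_llm(f"Write a short client-ready summary from this outline:\n{outline}")

print(summarize_client_report("Kickoff slipped two weeks. Budget unchanged at $50k."))
```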


On top of that, you need people who are eager to learn, who will do the work with the help of AI, and who will manage those systems.


What are the biggest challenges companies face when adopting AI?


Companies are already successful at deploying AI as a thought partner, but for use cases that go beyond chatbots there are many obstacles.


  1. Ego: Many engineers and CTOs want to do everything, especially at companies under 1,000 employees. At these companies, engineers often show a (vibe-coded) prototype that gets 40% of the way there rather quickly. Unfortunately, computer science and ML / LLM engineering are different skill sets. These companies often spend years on projects with poor ROI, effectively subsidizing the upskilling of engineers who then leave once they have acquired those skills.

  2. Understanding the Technology: Companies don’t understand how the technology works, so they don’t know which problems to tackle or what data and processes are needed to hand more work off to AI successfully. For example, LLMs don’t do math, yet companies routinely ask them to rank answers; the rankings inevitably follow a bell curve of what an answer should look like while having no mathematical basis (see the sketch after this list).

  3. Talent / Expertise: Turnover among engineers is among the highest of any profession. It often takes 3 to 6 months to hire AI talent, and ML / AI engineers often cost over $200k. This is why I explain to companies and prospects that it is often better to buy than to build themselves.

  4. Pace of Advancements: The pace at which new models and solutions appear in the AI sphere is overwhelming for most companies. New models, tools, and frameworks ship every day, and companies can’t keep up. Additionally, when you try to adopt a new model or framework, there is a risk it won’t work within the systems you have already built.
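To make the second obstacle concrete, here is a minimal sketch of the workaround Elias implies: keep the arithmetic in ordinary code and use the LLM only for qualitative framing. The vendor data, the weights, and the commented-out `call_llm` helper are all hypothetical.

```python
# Hypothetical vendor data; the numbers and weights are illustrative only.
candidates = [
    {"name": "Vendor A", "cost": 120_000, "sla_uptime": 0.999},
    {"name": "Vendor B", "cost": 95_000,  "sla_uptime": 0.995},
]

# Deterministic scoring in code: explicit weights, reproducible on every run,
# instead of asking an LLM to "rank" options it cannot actually compute.
def score(c: dict) -> float:
    return 0.6 * (1 - c["cost"] / 150_000) + 0.4 * c["sla_uptime"]

ranked = sorted(candidates, key=score, reverse=True)
print([c["name"] for c in ranked])

# If narrative is needed, the LLM only explains a ranking code already produced:
# explanation = call_llm(f"Explain this vendor ranking to a CFO: {ranked}")
```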


What risks come with increased automation, and how can they be managed?


One risk that comes to mind from using AI daily and building lots of automations is that LLMs are not deterministic: for a given input, you don’t get the same output each time. In areas where you need determinism, you should not be relying on LLMs, or at the very least you need to build deterministic layers into your architecture for exact results.
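As one way to build such a layer, here is a minimal sketch: the LLM call is injected as a function, and deterministic code, not the model, decides whether its output is accepted. The field names, prompt, and retry count are assumptions for illustration.

```python
import json
from typing import Callable

REQUIRED_FIELDS = {"invoice_id", "amount", "currency"}

def extract_invoice(text: str, call_llm: Callable[[str], str], max_retries: int = 3) -> dict:
    """Wrap a non-deterministic LLM call in a deterministic validation layer."""
    for _ in range(max_retries):
        raw = call_llm(f"Return JSON with keys invoice_id, amount, currency from:\n{text}")
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry rather than pass garbage downstream
        # Deterministic checks, not the model, decide what counts as a valid result.
        if REQUIRED_FIELDS <= data.keys() and isinstance(data["amount"], (int, float)):
            return data
    raise ValueError("Output never passed validation; route this item to a human")
```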


As models get updated, the outputs can change completely because the new model may need to be prompted differently.


Another issue I see is that LLMs are not good at following step-by-step, rule-based instructions. Even with hard-coded instructions and workflows you can only get so far. To get LLMs to perform consistently, you need a multi-agent structure with an operator and sub-agents that are specialized for specific tasks. In such an architecture, each sub-agent has clear instructions and tools, and the orchestrator understands the available sub-agents and tools so it can route requests. This is the architecture we favor at AgentsForHire.ai.
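Below is a minimal sketch of that orchestrator / sub-agent shape in Python. The agent names are purely illustrative, and the keyword routing stands in for the operator model; this is not AgentsForHire.ai's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class SubAgent:
    name: str
    instructions: str              # narrow, explicit remit for one task
    handle: Callable[[str], str]   # the tool or model call that does the work

REGISTRY: Dict[str, SubAgent] = {
    "billing": SubAgent("billing", "Answer invoice and refund questions only.",
                        lambda q: f"[billing agent handles: {q}]"),
    "scheduling": SubAgent("scheduling", "Book and reschedule appointments only.",
                           lambda q: f"[scheduling agent handles: {q}]"),
}

def orchestrate(request: str) -> str:
    # In production the operator would be an LLM that knows each sub-agent's
    # instructions and tools; a keyword check stands in for it here.
    key = "billing" if "invoice" in request.lower() else "scheduling"
    return REGISTRY[key].handle(request)

print(orchestrate("Where is my invoice for March?"))
```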


How are governments regulating AI today, and is it enough?

The US has taken the approach to technology it has taken throughout its history: embracing creative destruction. It has decided that winning the race and having the leading companies in the AI sector is what matters, and that the industry should police itself.


The philosophy with regulation in general is to let the players police themselves and to write rules once the industry is more established or something goes wrong. The US is prioritizing winning the AI race against China and has decided not to impose constraints that would set the industry’s development back a few years, since it perceives that as ceding a competitive advantage to China.


In contrast, Europe has a very cautious approach and wants to create frameworks for the technology based on its existing laws and regulations, which favor privacy and equitable outcomes. Some of these regulations are quite difficult for AI companies to follow because even the leaders at companies like OpenAI and Meta can’t explain all the details of how LLMs work.


I think that while the approach of the United States is lacking in protections for citizens and creators, the economic and national security risks of not winning the AI race outweigh those costs.


What should countries do to balance innovation, safety, and jobs in the age of AI?


The best approach would be to start with an understanding of what ML / AI and LLMs do well and what they do poorly. You need to quantify the impact that a decision made by an automated system has on human outcomes.


For example, if you are creating a system that impacts individuals’ healthcare, financial outcomes, or jobs, the cost of failure and the human impact are very high. To design such a system you’d want to lower the threshold or tolerance so that the model is biased toward outcomes that benefit society.


With the money saved through automation you can increase the number of positive outcomes. In the example I’m describing, if you are deciding on approving treatments or allocating healthcare resources, you should tune the model and its thresholds to maximize those positive outcomes.


On top of that, when a decision is unclear, you need to escalate it to a human or give the end user the ability to escalate the decision to a human for review.
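Here is a minimal sketch combining both points: an approval threshold tuned toward beneficial outcomes, plus an uncertainty band that forces escalation to a human reviewer. The score, threshold, and band are made-up numbers for illustration.

```python
APPROVE_THRESHOLD = 0.55   # deliberately permissive: denying a valid claim costs more here
REVIEW_BAND = 0.15         # scores this close to the threshold are never auto-decided

def decide(model_score: float) -> str:
    if abs(model_score - APPROVE_THRESHOLD) < REVIEW_BAND:
        return "escalate_to_human"   # unclear cases go to a reviewer, not the model
    return "approve" if model_score >= APPROVE_THRESHOLD else "deny"

print(decide(0.62))   # borderline -> escalate_to_human
print(decide(0.30))   # clearly below threshold -> deny
print(decide(0.85))   # clearly above threshold -> approve
```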


Some jurisdictions, like the EU, have made rules stating that when an automated system impacts a human in certain areas, the decision should be auditable and explainable.


In terms of the impact on jobs and unemployment, if knowledge of AI and how to implement it were universally distributed, I think unemployment would increase by 20% overnight. Having worked in AI for 9 years, I think this process will take longer than people expect, and ideally over that time we will transform our education system.
