Almost every ambitious engineer or student I meet lately asks the same anxious question: "Do I have to learn machine learning or LLMs, or risk becoming obsolete?"
This anxiety stems from a critical lack of clarity about the differences between using LLMs, engineering LLMs, and specializing in Machine Learning.
As someone who has led large-scale ML and LLM organizations at Google Brain and Turing, I want to clarify something important: these are not interchangeable skills.
Knowing the difference can save you years of wasted effort—and help you build a career that is truly resilient. I will break down exactly what each skill involves, why it matters, and where to wisely invest your time.
What Are They?
Before diving deeper, let's clarify the confusion directly:
Using LLMs: Leveraging LLMs such as ChatGPT in your daily tasks, much as you use email or search engines. Think of software engineers adopting LLMs as adding a basic tool to their toolkit, not as specialized engineering. This will soon be essential for most roles.
Typical role titles: any role, e.g., student, software engineer, program manager
Engineering LLMs: This involves integrating existing LLMs into products: calling APIs, handling model outputs in real-world software, and managing LLM-driven systems (a minimal sketch follows these definitions). It does not require deep theoretical ML knowledge or training models from scratch.
Typical role titles: software engineer, full-stack engineer
Specializing in Machine Learning: This means going deep into one specific ML subfield (like NLP, vision, or recommender systems). It demands rigorous theory, significant investment, and specialized expertise.
Typical role titles: software engineer (ML), machine learning engineer, research scientist
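To make the distinction between using and engineering LLMs concrete, here is a minimal sketch of the "engineering" bucket, assuming the OpenAI Python SDK (v1+); the summarize_ticket helper, model choice, and retry policy are illustrative assumptions on my part, not a recommended implementation:

```python
# A minimal sketch of "engineering LLMs": wrapping a hosted model behind a
# validated function inside a product, rather than chatting with it directly.
# Assumes the OpenAI Python SDK (v1+); helper name, model, and retry policy
# are hypothetical choices for illustration only.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_ticket(ticket_text: str, max_retries: int = 2) -> dict:
    """Ask the model for a structured summary and validate the output."""
    prompt = (
        "Summarize this support ticket as JSON with keys "
        "'summary' (string) and 'priority' (low|medium|high):\n\n" + ticket_text
    )
    for attempt in range(max_retries + 1):
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        raw = response.choices[0].message.content or ""
        try:
            data = json.loads(raw)
            if data.get("priority") in {"low", "medium", "high"}:
                return data  # output passed validation; safe to hand downstream
        except json.JSONDecodeError:
            pass  # malformed output; fall through and retry
    raise ValueError("Model did not return valid JSON after retries")
```

Notice that nothing here requires knowing how the model was trained. It is ordinary software engineering: API calls, output validation, retries, and error handling, which is exactly why general engineering foundations matter so much in this work.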
Misunderstanding these differences leads engineers into a career trap: investing critical years in a specialization that can narrow, rather than broaden, their long-term career opportunities.
Using LLMs Is the New Baseline
Let’s get one thing out of the way: It is now standard practice to effectively use LLMs in your daily tasks. In my experience, professionals who master everyday use of LLMs show significant productivity advantages compared to those who do not.
I consistently see engineers adept at LLM prompting quickly produce clearer documentation with accurate grammar, instantly generate precise meeting summaries with actionable items, and unblock themselves faster during coding and debugging. These seemingly minor efficiencies compound dramatically, clearly distinguishing strong performers from peers who have not adapted.
These daily efficiencies add up—rapidly. Becoming skilled at leveraging LLMs is no longer optional; it is the new baseline for productivity.
I predict that using LLMs will become a hard or strongly preferred requirement even in regular non-tech jobs, just like knowing how to use Word and Excel.
Engineering Foundations and Grit
While mastering everyday LLM usage is now essential, rushing into specialized LLM engineering or machine learning too early in your career can backfire if you have not yet built solid engineering foundations.

Early career growth primarily comes from developing robust foundational engineering skills: coding, resilient system design, rigorous debugging, and relentless problem-solving. You build these skills by wrestling repeatedly with real-world complexity: debugging intricate production outages, architecting scalable backend services, and refining judgment under pressure. Such wisdom cannot be replicated simply by mastering LLM prompting, because it is about knowing which questions to ask, interpreting ambiguous signals, and synthesizing hard-earned lessons from lived experience, not just ingesting infinite information.
Consider my own experience at Turing: I had to navigate stakeholder expectations around our critical email infrastructure, which meant understanding technical debt, balancing urgent timelines, ensuring regulatory compliance and system reliability, weighing GDPR implications, and protecting customer trust. No LLM could autonomously decide which compromises were acceptable, which trade-offs to communicate, or how to balance competing demands effectively; those judgments came from hard-won experience and rigorous thinking.
There is a precedent we can draw from. The famous 2016 match between AlphaGo and Lee Sedol[1] had a profound impact on the Go community, with AI pushing Elo ratings far higher than was previously thought possible. Yet even as AI raised the ceiling of what was possible, top Go players still practice continuously to refine their skills[2][3]. Mastery never comes for free, whether in Go or engineering; it demands constant practice, struggle, and persistence.
Take the Senior Staff Engineer (LLM Product) role that I am hiring for. The interview loop does not include anything about the usage of LLMs; it focuses on your past experience as an engineer and a leader. (Yes, I will not ask whether you have done LLM prompting…)
Bottom line: Do not skip the grind. The grit developed through repeatedly solving difficult, real-world engineering problems is exactly what builds your foundational engineering skills. These foundations, not early mastery of trendy specializations, determine your long-term resilience and career strength.
Engineering LLMs: A Powerful Tool Built on Strong Foundations
Engineering LLMs, or plain ML models, is mostly practical engineering: roughly 95% of the work lies beyond the ML algorithms themselves[4]. While you might collaborate closely with ML specialists, deep theoretical ML expertise is not required.
Critically, your effectiveness in engineering LLM solutions relies on strong foundational engineering skills: debugging complex systems, disciplined design, and rigorous problem-solving. Effective engineers in this space build on these foundations, not the other way around. The best researchers I have seen at Google DeepMind were those who could translate their ideas into real engineering code and business impact.
However, robust engineering foundations are valuable well beyond LLM engineering. The same skills (architectural judgment, debugging rigor, and structured problem-solving) provide adaptability across countless specializations: infrastructure, front-end development, databases, security, and more.
Bottom line: Engineering LLMs is just one powerful direction enabled by strong foundational engineering skills. Your long-term career resilience and flexibility depend on these foundations—not early specialization alone.
Specializing in Machine Learning: A Deep Investment with Focused Returns
When I refer to specializing in machine learning, I mean deep and rigorous expertise developed within a specific subfield, such as natural language processing, computer vision, recommender systems, or reinforcement learning. This specialization is fundamentally different from engineering with machine learning models, which is about integrating existing ML or LLM models into software products.
Specializing in machine learning involves significant investment: mastering theoretical foundations, mathematics, statistics, algorithmic design, and substantial hands-on experience with model training, evaluation, and optimization. It demands a high degree of commitment, often involving years of focused study and practice. However, this deep expertise is invaluable for roles explicitly centered around machine learning innovation—such as ML research, large-scale model development, or production-level ML system design.
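For contrast with the API-level sketch earlier, here is a deliberately toy illustration of the hands-on side of specialization, assuming PyTorch and synthetic data of my own invention; real specialist work wraps far more theory, data engineering, and evaluation rigor around a loop like this:

```python
# A deliberately tiny illustration of specialization-level work: defining a
# model, writing the training loop, and evaluating it yourself rather than
# calling a hosted API. Assumes PyTorch; data and architecture are placeholders.
import torch
from torch import nn

# Synthetic binary-classification data (a stand-in for a real, messy dataset).
X = torch.randn(1000, 20)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # forward pass on the full batch
    loss.backward()               # backpropagation
    optimizer.step()              # parameter update

with torch.no_grad():
    accuracy = ((model(X) > 0).float() == y).float().mean()
print(f"final loss {loss.item():.3f}, train accuracy {accuracy.item():.2%}")
```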
It is crucial to recognize clearly that deep ML specialization is neither a shortcut nor universally necessary for every engineering career. Rather, it is a targeted path best suited for those genuinely passionate about pushing the boundaries of ML technologies themselves, not merely using them. For everyone else, developing a robust foundational engineering skill set first is still likely to yield better long-term career resilience and flexibility.
Bottom line: Pursue deep machine learning specialization thoughtfully and intentionally. While it can unlock exciting career paths, its value remains closely tied to the clarity of your professional goals, rather than being broadly applicable to all engineering trajectories.
Bringing It All Together: Build Foundations First, Then Specialize
Almost every ambitious engineer or student I meet lately asks the same anxious question: "Do I have to learn machine learning or LLMs, or risk becoming obsolete?"
This anxiety stems from confusion about the differences between using LLMs, engineering LLMs, and specializing in Machine Learning. Here is the reality clearly summarized:
You must learn practical, everyday usage of LLMs—just like you learned email or search engines—to maintain baseline productivity.
You must also build strong foundational engineering skills first—coding, rigorous debugging, robust system architecture, and the grit to persevere through challenging problems.
With these foundations solidified, you can then confidently pursue deeper specializations, whether it is engineering with LLMs or deep Machine Learning.
Your long-term resilience as an engineer depends on clearly understanding these distinctions. Foundations and grit first, specialization second. It is not about chasing trends, but thoughtfully layering new skills onto your existing strengths. That is how you truly future-proof your engineering career.
Hopefully you can now get that question out of your head and get some good sleep.
If this article helped you get some good sleep, please subscribe to this Substack to support me! Thank you so much!
References
[3] At the time of writing, the top-ranked player Shin Jinseo has shown a marked increase in Elo rating since 2016, going from the 35xx range to 3890. So have Wang Xinghao (from 3417 to 3713) and Ding Hao (from the 34xx range to 3677).
[4] Machine Learning: The High-Interest Credit Card of Technical Debt