AI translation is changing the translation industry fast — but “dragging it down” is only true in certain parts of the market.
The most accurate picture is that the industry is splitting into two realities:
- Commodity translation (high-volume, low-risk content) is being automated and price-compressed.
- High-stakes translation (legal, medical, regulated, brand-critical work) still requires humans for accountability, context, and risk control — and continues to command value.
Even the market-level numbers reflect that “split”: the industry is still large and still growing, but growth expectations are being revised as automation reshapes pricing and workflow. Nimdzi estimates language services reached USD 71.7bn in 2024 and projects USD 75.7bn in 2025, while noting a shift to slower growth than pre-AI expectations.
So the question isn’t “Will AI replace translation?”
It’s: Which translation work is being commoditised, which work is being elevated, and how should buyers and providers respond responsibly?
1) The market is growing — but working conditions are polarising
If you look only at demand signals, translation isn’t disappearing. Smartling’s 2024 report highlights translation volumes up 30% year-on-year, with more businesses planning to implement generative AI.
But if you look at the lived experience of many professional translators, the story can feel very different.
In the UK, the Society of Authors reported (Jan 2024 survey, published April 2024) that:
- 36% of translators had already lost work to generative AI
- 43% said the value of their income had decreased because of generative AI
- 77% expected future income to be negatively affected
CIOL’s Translators Day survey (March 2025) adds nuance: 37% reported less work, while the rest reported similar or more — suggesting impact varies by language pair, niche, and client base.
What this points to: demand may be rising overall, but the distribution of value is changing — with greater pressure on generalist, high-volume translation and more opportunity in specialist work.
2) MT post-editing is becoming the default — and that’s where a lot of “downward pressure” comes from

One of the biggest structural shifts is the rise of MTPE (machine translation post-editing) — where a human corrects AI/MT output rather than translating from scratch.
Nimdzi reports that in 2024:
- 62.6% of LSPs delivered more than 30% of their projects as MTPE (up from 29.1% in 2022)
- 45.2% used MTPE for at least 50% of projects (up from 7.8% in 2022)
This matters because MTPE is often priced differently — and not always in a way that reflects real effort or risk. Academic work on MTPE pricing practices shows how complex and contested “fair pricing” can be when effort varies widely by text quality and domain.
A crucial misconception: “Post-editing is always faster”
Sometimes it is. Sometimes it isn’t.
CIOL published an analysis highlighting that looking only at average speed can mislead: in one English→Polish dataset, post-editing was slower than human translation in 89% of tasks (on average about 4% slower), even though the overall averages suggested otherwise.
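To see how this can happen, here is a minimal illustration with made-up numbers (not the CIOL data): a few unusually fast post-editing jobs can pull the overall mean below the human baseline even though post-editing is slower in nearly every individual task.

```python
# Hypothetical task timings in minutes; illustrative only, not the CIOL dataset.
human_minutes = [30] * 10               # translating from scratch: steady 30 min per task
mtpe_minutes = [31] * 9 + [12]          # post-editing: slightly slower in 9 tasks, much faster in 1

mean_human = sum(human_minutes) / len(human_minutes)   # 30.0
mean_mtpe = sum(mtpe_minutes) / len(mtpe_minutes)      # 29.1 -> "post-editing looks ~3% faster on average"

slower_tasks = sum(m > h for m, h in zip(mtpe_minutes, human_minutes))
print(f"Mean human: {mean_human}, mean MTPE: {mean_mtpe}, MTPE slower in {slower_tasks}/10 tasks")
# Mean human: 30.0, mean MTPE: 29.1, MTPE slower in 9/10 tasks
```

The point: averages reward a few easy wins, while per-task comparisons show where the real effort sits.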
Why that happens: when AI output is “almost right” but wrong in subtle ways, correcting it can be cognitively demanding — especially in technical, legal, or sensitive content where small errors carry big consequences.
3) Quality has improved — but reliability is not the same as fluency
Modern systems can produce text that looks polished. The risk is that it can still be incorrect, incomplete, or contextually wrong — and those errors can be hard to spot quickly because the output sounds confident.
Research using eye-tracking in post-editing workflows repeatedly shows that effort is not just about time: cognitive load changes depending on MT quality, text type, and task conditions (for example, medical texts for patients).
Professional bodies are also warning against “AI by default” in sensitive settings. AUSIT’s 2025 position statement stresses that machine output can be less reliable (particularly for some languages), and that post-editing may be more labour-intensive than translating from scratch depending on the text and quality of the output.
Bottom line: AI output can be fluent, but fluency is not proof of accuracy.
4) Why some translators aren’t adopting genAI (even while enterprises push it)

There’s a visible gap between enterprise localisation teams adopting AI and many individual professionals being cautious.
The Institute of Translation and Interpreting (ITI) reports a member survey in which 83% were not currently using generative AI in their work, while 17% had begun incorporating it, alongside concerns and uneven readiness across the profession.
This makes sense: translators carry professional responsibility for quality, confidentiality, and downstream consequences — and many client documents contain personal data.
5) The “hidden issue”: confidentiality, personal data, and compliance
Translation projects often involve personal data (IDs, medical records, legal documents). That creates compliance obligations under GDPR/UK GDPR and client confidentiality expectations — and those obligations don’t disappear because a tool is “just translating”.
An ATC/EUATC guidance document on GDPR and personal data in translation highlights how translation frequently involves cross-border processing and “incidental” personal data that clients may not even realise is present.
There’s also a growing regulatory environment around AI systems themselves. For example, the European Commission issued guidelines clarifying obligations for general-purpose AI model providers under the EU AI Act, with obligations entering into application for providers from 2 August 2025. Reuters coverage also notes transparency and copyright-policy expectations for foundation/GPAI models under the EU’s framework.
Practical implication for buyers: you need to know whether your vendor is using AI, what data is being sent where, and what happens to it.
6) Standards exist for a reason: “AI + human” can be done properly
If a buyer wants MTPE, the best practice approach is to treat it as a defined service — not a shortcut.
Two standards matter here:
- ISO 18587:2017 — requirements for full, human post-editing of MT output and post-editor competence
- ISO 17100:2015 — requirements for delivering a quality translation service, including processes and resources
You don’t have to be certified to learn from the logic: define scope, define quality requirements, define revision and QA steps, and assign accountable humans.
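As a rough illustration of that logic only (the structure and field names below are hypothetical, not taken from either standard), an MTPE assignment can be written down as a small, explicit brief before any work starts:

```python
from dataclasses import dataclass

@dataclass
class MTPEBrief:
    """Hypothetical project brief capturing the 'defined service' idea."""
    language_pair: str           # e.g. "en -> pl"
    content_type: str            # e.g. "support articles", "patient leaflet"
    post_editing_level: str      # "full" or "light" post-editing
    quality_requirements: list   # terminology, style guide, accuracy expectations
    qa_steps: list               # revision, spot checks, final sign-off
    accountable_editor: str      # a named human responsible for the delivered output
    data_handling: str           # where content is processed, retention, confidentiality

brief = MTPEBrief(
    language_pair="en -> pl",
    content_type="support articles",
    post_editing_level="full",
    quality_requirements=["approved termbase", "client style guide"],
    qa_steps=["second-linguist revision", "automated QA checks"],
    accountable_editor="named post-editor / project lead",
    data_handling="processed in-region, no retention, no training on client content",
)
```

The format doesn’t matter; what matters is that every one of those questions has an explicit answer and an accountable owner before the first segment is post-edited.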
7) What AI is doing to language skills and education
This matters for the medium-term health of the profession.
A UK HEPI note on language learning warns of a “vicious cycle” of declining uptake, leading to cuts in provision and degree programmes, risking a national skills deficit. Mainstream reporting has also highlighted universities axing language degrees and departments amid changing demand and perceptions that tools can substitute for learning.
Academic economics commentary suggests AI translation improvements can reduce incentives to invest in bilingual skills in some contexts, though impacts vary by sector.
This doesn’t mean “humans won’t be needed”. It means we may face fewer highly proficient linguists over time — which could make genuine expertise rarer (and more valuable) in high-stakes areas.
8) A practical decision guide: when AI translation is appropriate — and when it’s risky

Here’s a simple rule that works in real life (a rough code sketch of this tiering follows the three lists below):
AI-only (no human review) is usually acceptable for:
- Internal understanding (“gist”)
- Low-risk content with no legal/medical consequences
- Fast, disposable drafts that will be rewritten and verified
AI + human post-editing can be appropriate for:
- High-volume content where style risk is manageable
- Content with strong terminology control and clear reference materials
- Projects with defined MTPE scope, QA checks, and accountability
Human translation (with revision) is strongly recommended for:
- Legal documents, immigration, court, contracts
- Medical/clinical content or patient-facing instructions
- Certified/notarised/official submissions
- Brand-critical copy (tone, persuasion, nuance)
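For teams that want to operationalise those tiers, here is a minimal routing sketch; the content categories, parameters, and decision rules are illustrative assumptions, not a definitive policy.

```python
from enum import Enum

class Workflow(Enum):
    AI_ONLY = "AI-only (no human review)"
    MTPE = "AI + human post-editing"
    HUMAN = "Human translation with revision"

# Categories that always route to human translation in this sketch.
HIGH_STAKES = {"legal", "medical", "certified", "brand_critical"}

def route_translation(category: str, audience: str, disposable: bool) -> Workflow:
    """Illustrative routing mirroring the three tiers above.

    category:   e.g. "legal", "medical", "support_article", "internal_memo"
    audience:   "internal" or "external"
    disposable: True for gist reading or drafts that will be rewritten and verified
    """
    if category in HIGH_STAKES:
        return Workflow.HUMAN
    if audience == "internal" or disposable:
        return Workflow.AI_ONLY
    # Everything else: high-volume external content where style risk is manageable,
    # assuming terminology control, a defined MTPE scope, and QA steps are in place.
    return Workflow.MTPE

# Example: a patient-facing leaflet never goes AI-only in this model.
print(route_translation("medical", "external", disposable=False))  # Workflow.HUMAN
```

In practice the inputs would come from your own content taxonomy and risk assessment, and even the AI-only tier assumes the output is never presented as official or client-facing.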
9) Procurement checklist (copy/paste for clients)
If you publish this piece, including a checklist like this increases trust immediately:
- Disclosure: Will any part of my content be processed by third-party AI/MT tools?
- Data handling: Where is the data processed and stored? Any retention/training on customer data?
- Confidentiality: NDA availability and internal access controls
- Quality model: Who is accountable for final output, and what QA steps are used?
- Standards alignment: Are workflows aligned to ISO 17100 / ISO 18587 principles?
- Fitness-for-purpose: What is the use case (internal vs official), and what error risk is acceptable?
10) So… is AI dragging the industry down?
It’s dragging down margins in commodity translation and destabilising many translators’ income — and the evidence from translator surveys supports that.
But it’s also expanding translation volume and changing workflows, pushing the industry towards scalable models where humans focus on higher-risk decisions rather than first-draft production.
The most defensible conclusion is:
AI is not ending translation. It is re-pricing it, repackaging it, and raising the bar on accountability.

FAQ section (publish-ready)
Will AI replace human translators?
Not fully in any setting where accuracy, liability, confidentiality, or brand nuance matter. What’s changing fastest is routine, high-volume content — often moving into MTPE workflows.
Is post-editing easier than translating?
Not always. Depending on MT quality and domain, post-editing can be cognitively heavy and may be slower than translating from scratch in many tasks.
Why do some professionals avoid genAI tools?
Because of risk: privacy, client confidentiality, and the difficulty of verifying subtle errors at speed. This caution shows up in professional-body surveys.
What’s the safest approach for organisations?
Use a tiered model: AI for low-risk internal content, MTPE where appropriate, and human translation with revision for high-stakes work — supported by clear process controls.
