10 Myths About Autonomous Networks
Disclaimer: This article was first published on the LinkedIn profile of Luqman Shantal, CEO of Makman and co-chair of the MAMA project for Measuring and Managing Autonomy.
I have been a practitioner of this craft for six years. Most weeks, I have the same conversation three times. Once with a client. Once with a vendor. Once with a software provider. The conversation is about what Autonomous Networks actually is, and why the other party's assumption about it is wrong.
The term was coined in 2019, a year before the pandemic remade everything, by a TM Forum working group I sat in. We meant something specific: a measurement of how much operational responsibility had transferred from people to systems, scenario by scenario. Six years later the term has become the hottest topic in our industry’s conferences. Operators are pursuing level four validations. Vendors are positioning around it. Software providers keep asking what they need to build. Almost no one knows what Autonomous Networks is.
This article is the conversation I have been having three times a week, written down once. Ten myths show up over and over. Here is the simplifying picture first, then the ten.
Picture a workforce analyst whose entire job is measuring how work is distributed between managers and their teams. The analyst watches how tasks transfer. Watches where the handoff fails. Recommends fixes. The analyst does not build the headcount model. Does not redesign the org chart. Does not define the role taxonomy. Does not run the capability assessment. They do one thing: measure delegation between people.
Autonomous Networks (AN) is the same role, scoped to a different object. The “manager” is the operations team. The “team” is the operational systems. The role watches how work transfers from people to systems. Watches where the handoff fails. Recommends fixes.
That is the work. Ten myths show up around it.
Myth 1: Autonomous Networks (AN) are only about the network
The brand name is misleading. AN, like AV (autonomous vehicles), is named after the most visible element but covers more than that element. AV includes the dealership, the workshop, and the regulatory layer that surrounds the vehicle. AN covers three layers: the resource layer (the network), the service layer, and the business operations layer.
ITIL was originally the IT Infrastructure Library. The acronym stayed; the expansion quietly fell out of use. ITIL now describes service management across industries, and the meaning of the letters has receded into the background.
eTOM was the Enhanced Telecom Operations Map. The acronym was avoided for a while, then returned with a different framing: the business process framework.
AN will follow the same path. The letters stay. Their meaning widens. The “Networks” qualifier will fade the way “IT Infrastructure Library” faded inside ITIL. AN remains AN; the N stops pointing only at the network.
For now, the name stays. The narrow scope it suggests does not.
Myth 2: Autonomous Networks and Autonomous Operations are the same program
They are different programs with different sponsors and different scopes.
AN is the focused closed-loop program. It measures how much operational work has transferred from people to systems, scenario by scenario. The CTO is its natural sponsor.
Autonomous Operations (AO) is the wider transformation program. It looks at all the enablers around the closed loop: operating model, capability maturity, data foundations, culture, technology readiness. The CEO is its natural sponsor.
AN is the depth instrument. AOMM (the Autonomous Operations Maturity Model) is the breadth instrument. AOMM has six dimensions: two are core (Operations and Party, both centered on the closed loop) and four are enablers (Strategy, Technology, Data, Culture). The core dimensions measure delegation in operations and the people who run them. The enablers measure the conditions that allow delegation to scale.
Each AOMM dimension has a natural owner at the C-level. Operations and Party sit with the CTO/CTIO, where the closed-loop work happens today. Technology is also the CTO’s, as an enabler rather than as the closed-loop core. Data sits with the CIO. Culture sits with the Chief People Officer. Strategy sits with the CSO. No single C-level peer can audit across all six dimensions without political friction, because for any given sponsor, several of the dimensions belong to peers. The CEO is the only role that sits above the dimension owners. That is why a full AOMM program needs CEO sponsorship: the alternative is friction, not insight.
Myth 3: Autonomous Networks are architecture work
Architects design the systems. AN watches how delegation moves through them.
The architect’s frameworks (TOGAF, ODA, eTOM) describe how the architect does their work. AN does not produce those artifacts. The architect’s input source is the operations people who already speak the language of operations. Two different crafts with two different practices, and no overlap between them.
Myth 4: Autonomous Networks are technology selection
Technology is auxiliary in this role. We do not pick the agentic framework, design the digital twin, specify the machine learning algorithm, or evaluate the data platform vendors. Those are different specializations.
What we do say about technology is what kind of help is needed at each level of delegation. Auxiliary tools that assist on individual tasks at level one. Static rules that trigger actions automatically at level two. Policies that specify a goal and apply the right rules conditionally at level three. AI models that drive the work and predict what is coming at level four.
Software providers regularly ask what they need to build. We answer in classes of capability, not in product specifications: tools that assist on individual tasks, rules that trigger automatically, policies that adapt to conditions, AI models that drive the work. The class signals what to build. The product is theirs to shape.
That is the language we use. The specific products and architectures are someone else’s work.
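The level-to-capability mapping above can be written down as a simple lookup. A minimal sketch, assuming the article's own shorthand for level numbers and class names (this is illustrative, not a formal TM Forum artifact):

```python
# Illustrative mapping of delegation levels to capability classes.
# The numbering and labels follow this article's shorthand, not a
# formal specification.
CAPABILITY_CLASS_BY_LEVEL = {
    1: "auxiliary tools that assist on individual tasks",
    2: "static rules that trigger actions automatically",
    3: "policies that specify a goal and apply rules conditionally",
    4: "AI models that drive the work and predict what is coming",
}

def capability_class(level: int) -> str:
    """Return the class of help needed at a given delegation level."""
    if level not in CAPABILITY_CLASS_BY_LEVEL:
        raise ValueError(f"no capability class defined for level {level}")
    return CAPABILITY_CLASS_BY_LEVEL[level]

print(capability_class(3))
```

The point of the sketch is the shape of the answer: a class of capability per level, with the product left open.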
Myth 5: Autonomous Networks should produce target and source eTOM process designs
RFPs sometimes ask AN specialists to produce target and source process maps in eTOM. This is not AN scope. It might be part of the wider AO scope when the operating model is being redesigned, but AN itself does not redesign processes.
AN x-rays the existing process through the lens of delegation. We map who does which step today (the worker or the system) and what the level of delegation is at each closed-loop stage. We do not redraw the process. The process is already there. We measure how much of it has been delegated.
When the architecture or operating-model function does its work, they may produce target and source process maps. We give them the requirements about delegation. They produce the maps.
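The x-ray described above can be sketched as a small data structure: for one scenario, record who performs each closed-loop stage today and at what delegation level. The stage names, scores, and the "weakest stage caps the scenario" rule are illustrative assumptions, not a formal AN methodology:

```python
# A minimal sketch of the delegation "x-ray": one reading per
# closed-loop stage, recording who does the step and at what level.
# Stage names and scores are invented for illustration.
from dataclasses import dataclass

@dataclass
class StageReading:
    stage: str   # closed-loop stage, e.g. "awareness"
    actor: str   # "worker" or "system"
    level: int   # delegation level observed at this stage

def scenario_level(readings: list[StageReading]) -> int:
    """A scenario is only as autonomous as its least-delegated stage."""
    return min(r.level for r in readings)

fault_mgmt = [
    StageReading("awareness", "system", 4),
    StageReading("analysis",  "system", 3),
    StageReading("decision",  "worker", 2),  # a human still approves
    StageReading("execution", "system", 3),
]

print(scenario_level(fault_mgmt))  # the decision stage caps the scenario
```

Nothing here redraws the process. The stages are the existing process; the readings only measure how much of each has been delegated.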
Myth 6: Autonomous Networks claim credit for KPI improvements
Operations teams improve their KPIs through their own work, with or without delegation. We do not claim full credit for the KPI movement. There is attribution to share with operational changes, architecture upgrades, training programs, and many other levers. What we measure is how much of the value moved through the delegation lever, because that is the lever that scales.
Myth 7: Use cases are the unit for Autonomous Networks
Companies in the Americas, Europe, Asia, and the Gulf keep asking us to deliver use cases as if those were the unit. They are not. The IOH CTO put it on TM Forum’s podcast clearly: “We are not focused on use cases. We are focused on fixing the platform.”
When you remove the architectural ceilings, your level rises across many use cases at once. Adding frequency bands enables the multi-radio-access-technology sub-scenario in energy efficiency at level four. Upgrading from TWAMP to iFIT improves the awareness stage in service quality. Moving from segment routing to SRv6 improves execution. Adding telemetry streaming where polling used to be improves what the system can analyze. None of those are use cases. They are platform moves that lift the autonomy ceiling.
The framework was inherited from SAE in autonomous vehicles for a reason. We observe the driver, not the manufacturer. We watch how delegation moves from the human to the car at each level: basic cruise control, advanced cruise control, traffic jam autopilot, robotaxi. The intelligence and the systems are the manufacturer’s work. The handoff at the wheel is what we measure.
Myth 8: We should run the full AOMM enterprise assessment from day one
Not in 2026.
The industry started AN with the driver of the car: the operations team. That is where the technology is feasible today and where the ROI is clearest. The operations dimension of AOMM is the most mature dimension because that is what the industry has been working on. It covers technical and service operations. It does not yet cover BSS, core commerce, or the dealer side of the network.
Much of the core transaction processing in BSS is deterministic by design: billing, charging, order management. Deterministic work does not need autonomy in the same way operational closed loops do. Determinism is the right tool for those flows. That is part of why BSS sits lower on the autonomy priority list. The non-deterministic BSS flows, including offer management, churn prediction, and fraud detection, will move up the priority list as the AN scenario map matures into the business operations layer.
The Party dimension follows the same pattern. The other four dimensions (Strategy, Technology, Data, Culture) are enablers and they are not yet at a maturity level where a full enterprise audit produces actionable insight.
Roughly 80% of the industry today scopes AOMM to its Operations dimension only. That is the right scope for 2026 because the technology feasibility and ROI lie there.
If you are the CEO and want a cross-organization AOMM audit, the technology may not yet support an honest score across all dimensions. Better to scope to where the keys actually open the doors. The driver of the car is the right starting point. The salesman at the dealership comes later, as technology matures.
There is also a sponsorship reason to scope. The full AOMM audit cuts across dimensions owned by different C-levels. Without the CEO’s authority behind it, the assessment cannot speak honestly about peer dimensions without creating friction. A CTO sponsor gets Operations and the Technology enabler. A wider audit waits for a higher-authority sponsor to commission it. We do not run an enterprise assessment without the enterprise’s senior owner asking for it.
Myth 9: Planning, design, and deployment deserve equal priority in Autonomous Networks
They are in the AN scenario map. That part is real. But they are not on the high-value scenario list, and that is not an oversight.
The current high-value scenarios are concentrated in operations: fault management, optimization, change, service assurance, customer care. The driver of the car. Operations is where the technology is feasible today and where the ROI is clearest in 2026.
Planning, design, deployment, inventory, and resource readiness have their own closed loops with different feasibility curves and different ROI. Some of that work can be autonomized, particularly the parts that are largely software-driven. Those cases will move up the priority list as the technology matures.
For now they are second-priority work, not the focus. The driver of the car comes first. The planner of the factory and the deployment engineer come later.
Myth 10: Autonomous Networks require a new competency framework and process redesign
People teams asked to support AN programs often start by drafting a new competency framework. They do not need to. The skills are the same skills, and so are the processes. eTOM does not need to be redesigned because of AN. We x-ray the existing process through the lens of delegation. We do not redraw it.
What changes for the worker is small but specific. Less time spent doing the task, more time spent validating what the system did. The worker becomes a checker at the edge, providing trust through judgment. The experience the worker built over years matters more, because that experience is what catches the AI’s mistakes. The role does not vanish. It moves up.
Why this matters
Most of the confusion in our industry comes from collapsing this specialization into one of the bigger boxes. RFPs, architecture practices, use-case demands, competency redesigns, process re-mappings, enterprise audits scoped beyond what the technology supports: all of them try to make AN look like something it is not.
AN should produce one thing only: a clear picture of how much work has actually moved from people to systems, and what would close the rest.
Get the role right and the program works. Get it wrong and you get confusion at premium prices.
Subscribe to the Level 4 Autonomy Insights newsletter, by Luqman Shantal, to stay informed about all things Autonomous Networks.