
Blue Planet’s Kailem Anderson on the need for data and AI standards
As part of our TM Forum Ten100 series of interviews, we caught up recently with Kailem Anderson, VP of Global Products & Delivery at Ciena’s Blue Planet division, to get his insight into the biggest challenges operators are facing as they adopt cloud-native technology and AI. He sees development of standards around data management and agentic AI as two of the most pressing. (Editor’s note: this interview has been edited for length and clarity.)

Q: What are the biggest operational challenges CSPs are facing right now, and what are some common mistakes they make on their automation journeys?
A: The first one is sunsetting legacy systems. A lot of inventory systems that have been around 30 to 40 years are built for a static world – they’re not built for dynamic, real-time operations. But operators still struggle to sunset them and move on because these systems are so entrenched in their environment, with spaghetti integration around them.
The second one is getting to this autonomous network. It really is the intersection of three key themes. First, it’s all about data integrity – as we like to say, ‘You can’t AI what you can’t see’. The second is this concept of intent-based networking, or a declarative model, where you bring state and relationship into your data so that you’ve got context awareness. And the third one is how you overlay AI or agentic capabilities on top of all that.
Another issue is developing an incremental approach to digital transformation: having a playbook on how you get to your future state. Focusing on the data is the first step. Then, you can layer-cake in your orchestration, agentic AI and closed-loop actions. But we still see a lot of operators struggling with how to divide this into a set of incremental steps.
Q: Where are the biggest stumbling blocks when it comes to organization and culture?
A: Transformation and operational challenges are a combination of people, process and tools. The people aspect always gets kicked down the road. You can have the best technology in the world and the best processes, but if you don’t deal with your organizational silos, you’re not going to get anywhere.
I’m really passionate about this because I live it every day. The three groups that are involved in the day-to-day management – the network team, the IT or the OSS team, and the operations team – all operate in silos, and they basically are stitched together by manual processes to hand things off.
With the introduction of AI and when you start looking at intent-based models and declarative models, there is a real opportunity to collapse some of these organizational hierarchies that are just stovepipe silos. There’s no reason why agentic AI can’t fulfill the needs of converged IT and operations teams or converged network and IT teams moving forward. Those organizations were born out of necessity, 25 to 30 years ago. But I do think they stifle innovation because you sell to one of those buying centers and then they implement things vertically, not horizontally, across the business.
Breaking down silos is really difficult because that culture and institutional bureaucracy is ingrained in the business. It’s a big transformation and there are a lot of politics around it within each operator. We still see fighting internally within the operators, but I would like to think that with new technology like AI they will start thinking outside the box. This is an opportunity to try and collapse and converge some of these functions together.
Q: Considering the long list of challenges CSPs are facing, where should they start?
A: Data is that first step. The cold, hard reality is that in most operators, data is spread across so many different places. It’s in legacy inventory systems they can’t get it out of. It’s in network management systems. It’s in SDN [software-defined networking] controllers. Let’s be honest: it’s in Excel spreadsheets also. And so, accessing that data so that they can weaponize it and use it is a big problem.
Having a data management strategy is absolutely critical – and having an ability to federate data from its various sources, because you’re never going to have a single source of truth. If you don’t know what your resources are and can’t roll them up into some type of service plan of record, you’re really going to struggle to drive automation.
How are you going to apply AI if you don’t know where the source data is? AI needs data. So, the foundation of having a data management strategy and a data governance strategy that looks at the lifecycle of your data is absolutely key.
Q: Among operators who are leaders in this area, how are they handling AI and data governance? Is it necessary to set up a center of excellence, or should AI governance be decentralized?
A: I think it's a bit of both, actually. The control and the policy need to be centralized, particularly in terms of that data integrity and governance piece. But then you have to empower the teams underneath it to experiment, to play and do what they need to do. I haven’t seen any operator that tries to implement a centralized AI execution strategy be successful. It just slows down the business.
Q: In my research on data architecture, I’ve been hearing a lot about ontologies and making sure that everybody is on the same page when they’re referring to concepts like data products, data owners and data consumers. Is this an area where we need standards?
A: Absolutely. I didn't use the term ‘ontology’, but for me, that's exactly what it is: being able to link your data and have that context awareness, or that data graph where you can sort of visualize the state and relationship. The data is living, and that’s where the orchestration and fulfillment vendors are really important, because we deal with stateful data, and that data changes. And when the data changes, you need to know the impact of it. That’s very different to taking data and just dumping it out to a data lake.
As soon as you start applying AI to data that is stateful and context aware, then you’re really going to start to get some of the benefits of the use cases, particularly multi-layer use cases where you’re trying to link, say, Layer 0 and Layer 3 together.
But we need to do this in a standards-based way, so that people aren’t reinventing the wheel. That’s the challenge.
Q: How does intent-based networking use data and AI?
A: Intent-based networking is about creating a data fabric layer that links your planning systems with your assurance and fulfillment systems, where you link state and relationship into that data model. But today, each operator implements state and relationship a bit differently, so you don’t get a commonality across operators because there is no standard for data integrity.
But I’d like to think that some of the work that is starting to play out in the TM Forum today will help drive standards. I think if you get the data plane right with the right state and relationship, you can define multi-layer loops through your models.
We need to start pivoting from APIs and NaaS and hierarchical models, which are pretty well thought through these days, to the data integrity and state and relationship issues. Figuring out how operators can implement intent-based networking in a consistent way is the next big frontier.
Q: What are your thoughts on agentic AI and the use of Model Context Protocol (MCP) and Agent2Agent Protocol (A2A)? Where should the standards work be done on these concepts?
A: AI is changing so quickly. Everybody has adopted MCP as sort of the southbound communication, and everyone’s looking at A2A in terms of agent orchestration. So, they’ve almost become de facto standards. But the industry is moving so fast that I would be worried to say that’s going to be the path moving forward, because something else could come in and wipe it out relatively quickly.
It will be very hard for operators to consume all these agents in a consistent way if we don’t really start dealing with agent orchestration – that A2A bit. Because I can tell you, I’m building an agent library of 50 to 100 agents over the next 12 months, and other vendors are going to be doing the same. Then, the operators are going to be saying, ‘Well, listen, I’ve got 50 agentic libraries now doing different stuff. How do I bring all that together and orchestrate it?’
There’s an opportunity here for standards, and I do think the work that the TM Forum is doing – that kind of thinking around how you can bring the telecom community together – is absolutely needed.