Amdocs is leading the next wave of telecom AI innovation through a strategic collaboration with NVIDIA. By integrating advanced AI tools like the universal LLM NIM and the Data Flywheel into the amAIz Suite, we’re enabling service providers to scale smarter, faster, and more efficiently.
Amdocs leads the charge in telco AI innovation through strategic NVIDIA collaboration
Amdocs is spearheading the next evolution of AI within the telecommunications industry. We’re continuing our strategic collaboration with NVIDIA, a global leader in accelerated computing, to advance and integrate their latest AI technologies into our amAIz Suite.
The partnership with NVIDIA underscores our dedication to equipping telcos with the cutting-edge capabilities they need to harness the transformative potential of AI and capitalize on the vast opportunities it presents. Our collaboration already spans crucial areas; most recently, we integrated the universal LLM NIM and the Data Flywheel, both designed to address specific challenges within the telecom landscape:
Universal LLM NIM
The universal LLM NIM microservice offers a unified, streamlined approach to deploying LLMs, with support for leading inference frameworks from NVIDIA and the community, including NVIDIA TensorRT-LLM, vLLM, and SGLang. For telcos, this flexibility enables a consistent, simplified workflow, allowing them to reliably deploy their choice of open LLMs and specialized variants using the inference backend best suited to their model-serving needs and infrastructure.
Key advantages of the universal LLM NIM microservice container include:
- Simplified deployment: Abstracts away much of the complexity involved in deploying LLMs, making them easier and faster to get up and running.
- High-performance inference: Optimized for NVIDIA accelerated infrastructure, it delivers low latency and high throughput.
- Broad model support: Supports models from popular sources like Hugging Face and those in TensorRT-LLM formats, providing access to a vast and rapidly expanding ecosystem of LLMs.
- Optimized by default: Includes smart defaults that provide optimized response latency and throughput without requiring manual configuration.
- Performance tuning: Offers simple options for performance tuning and enhancement, giving advanced users fine-grained control over inference parameters.
- Enterprise support: Part of NVIDIA AI Enterprise, it’s continuously managed and updated by NVIDIA with support for multiple inference backends.
With NIM integrated into our amAIz GenAI platform, telcos can overcome the deployment hurdles associated with LLMs and drive innovation and efficiency. The integration supports a wide range of LLMs for the diverse applications and AI agents that Amdocs deploys for telcos (a brief usage sketch follows the list below), such as:
- Care & sales agents
- Network agents & automations
- Content creation
- Custom language translation
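To make this concrete, here is a minimal sketch of how an application might query an LLM served by a NIM container. NIM exposes an OpenAI-compatible API, so the standard openai Python client works; the endpoint URL, API key handling, model name, and prompts below are illustrative assumptions, not values from an actual deployment.

```python
# Minimal sketch: querying an LLM served by a NIM container through its
# OpenAI-compatible API. The base_url and model name are assumptions for
# illustration; substitute the values of your own deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-needed-locally",         # placeholder; local runs may not require a key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # example open model; use whichever your NIM serves
    messages=[
        {"role": "system", "content": "You are a telecom customer-care assistant."},
        {"role": "user", "content": "Why is my roaming data not working abroad?"},
    ],
    temperature=0.2,
    max_tokens=256,
)
print(response.choices[0].message.content)
```

Because the interface stays the same regardless of which inference backend serves the model, swapping TensorRT-LLM for vLLM or SGLang requires no change on the application side.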

The Data Flywheel: driving efficiency with fine-tuned LLMs
Amdocs also recognizes the critical importance of continuously improving the performance and efficiency of its AI agents. NVIDIA’s Data Flywheel approach for self-improving AI models is central to this goal, using real-world data and feedback for continuous model tuning, evaluation, and deployment. At its core, the approach enables telcos to fine-tune smaller, more efficient models to achieve the accuracy of much larger ones, significantly reducing both computational costs and latency. Its ability to leverage signals from production traffic and direct user feedback provides valuable insight into how a model performs in actual use and helps identify areas for improvement.
The Data Flywheel methodology comprises several key stages, illustrated in the sketch after this list:
- Data curation: The careful selection, cleaning, and preparation of data to ensure its quality and relevance for training AI models.
- Ongoing experimentation: The continuous exploration of different model architectures, training techniques, and hyperparameters to identify the optimal configuration.
- Rigorous evaluation: The thorough assessment of model performance using a variety of metrics and benchmarks to measure accuracy, reliability, and effectiveness.
- Iterative fine-tuning: The process of adjusting model parameters based on evaluation results to further improve its performance.
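As a rough illustration of how these stages compose into a loop, here is a hypothetical Python sketch. Every helper in it is a stub standing in for a real pipeline stage (data curation, fine-tuning, evaluation); none of them are actual Amdocs or NVIDIA APIs.

```python
# Hypothetical sketch of a single Data Flywheel iteration. The helpers below
# are stubs standing in for real pipeline stages, not actual APIs.

def curate(logs: list[dict]) -> list[dict]:
    """Data curation: keep interactions that received positive user feedback."""
    return [entry for entry in logs if entry.get("feedback") == "positive"]

def finetune(examples: list[dict]) -> dict:
    """Fine-tuning: stub that 'trains' a candidate model on curated examples."""
    return {"name": "candidate-lora", "trained_on": len(examples)}

def evaluate(model: dict, eval_set: list[dict]) -> float:
    """Evaluation: stub that would score the model on a held-out set."""
    return 0.83  # placeholder accuracy

def flywheel_iteration(logs: list[dict], eval_set: list[dict],
                       baseline_accuracy: float) -> dict | None:
    """Run one loop: curate data, fine-tune a candidate, and promote it
    only if it beats the model currently in production."""
    candidate = finetune(curate(logs))
    if evaluate(candidate, eval_set) > baseline_accuracy:
        return candidate  # deploy the improved model
    return None           # keep the existing baseline
```

Each pass through the loop feeds fresh production signals back into curation, which is what makes the flywheel self-improving rather than a one-off training run.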
At Amdocs, we’ve operationalized the Data Flywheel methodology through a sophisticated LLMOps (Large Language Model Operations) pipeline that leverages NVIDIA NeMo microservices and NVIDIA NIM to automate and streamline each stage of the process, enabling efficient fine-tuning of LLMs.
As an example, on specific tasks, fine-tuning a smaller model with the pipeline delivered a substantial performance boost: accuracy on the test set increased from 0.74 for the base Llama 3.1 8B Instruct model to 0.83 for the LoRA-fine-tuned version, even with only 50 training examples. This demonstrates the power of fine-tuning to achieve high accuracy with smaller, more cost-effective models.
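Our pipeline performs this fine-tuning through NVIDIA NeMo microservices; purely to illustrate the LoRA technique itself, the sketch below uses the open-source Hugging Face peft library instead, which is an assumption for demonstration and not the pipeline described above. The model name and adapter hyperparameters are likewise illustrative.

```python
# Illustrative LoRA setup using the open-source Hugging Face peft library
# (an assumption for demonstration; the pipeline above uses NVIDIA NeMo
# microservices). LoRA trains small low-rank adapter matrices while the
# base model's weights stay frozen, which is why a handful of examples
# and modest compute can still move accuracy meaningfully.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

lora_config = LoraConfig(
    r=16,                                 # rank of the adapter matrices
    lora_alpha=32,                        # scaling factor for adapter updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights
```

Because only the adapter weights are trained and shipped, the fine-tuned variant can be served alongside the base model at a fraction of the cost of hosting a larger model.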
Furthermore, the pipeline, which is built upon the NVIDIA AI blueprint for building data flywheels, is seamlessly integrated into our existing amAIz platform’s CI/CD (Continuous Integration/Continuous Delivery) pipeline. This tight coupling enables powerful evaluation and crucial regression testing for any newly released LLMs that are being considered for use by the amAIz platform. This means only the highest-quality and most reliable LLMs are deployed, and any potential issues are identified and addressed early in the development cycle.
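One way to picture that regression gate is the hypothetical pytest-style check below, which blocks a candidate LLM that scores under the production baseline. run_eval is a stub standing in for the real evaluation harness; all names and thresholds are illustrative, not values from the amAIz pipeline.

```python
# Hypothetical CI/CD regression gate for newly released LLMs. run_eval is a
# stub standing in for the real evaluation harness; names and thresholds are
# illustrative only.

BASELINE_ACCURACY = 0.83  # accuracy of the model currently in production

def run_eval(model: str, dataset: str) -> float:
    """Stub: the real pipeline would score the model on the dataset."""
    return 0.85  # placeholder result

def test_candidate_meets_baseline():
    accuracy = run_eval(model="candidate-llm", dataset="regression-set-v1")
    assert accuracy >= BASELINE_ACCURACY, (
        f"candidate accuracy {accuracy:.2f} fell below "
        f"baseline {BASELINE_ACCURACY:.2f}"
    )
```

Running such a check on every candidate release is what catches quality regressions early, before a new model reaches production traffic.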
By implementing the Data Flywheel, leveraging NVIDIA’s technologies, and focusing on efficient fine-tuning strategies, the Amdocs amAIz Suite establishes a continuous improvement loop for LLMs. This allows us to achieve enterprise-level accuracy with compact, cost-efficient models tuned for reduced inference latency and total cost of ownership (TCO), making them ideal for scaling agentic applications. It also ensures our AI solutions are both cutting-edge and economically viable, keeping us at the forefront of innovation and delivering maximum value to our customers.
amAIz for sovereign AI factories
The Amdocs amAIz GenAI platform and amAIz agents are designed to integrate seamlessly with AI factories, leveraging high-performance infrastructure from providers such as Dell and HP along with NVIDIA AI Enterprise software, including NIM and NeMo. Building on this foundation, amAIz agents can autonomously manage customer care, sales, and network tasks, enabling telco-native operations that transform customer engagement and operational efficiency. In practice, this drives measurable value for both telcos and enterprises: dramatically reduced handling time, fewer repeated calls, and lower token consumption, alongside improved accuracy, latency, and overall customer satisfaction. With this flexibility, telcos can accelerate AI adoption by running the amAIz GenAI platform and agents on scalable, adaptable AI factories built to meet their specific needs.
Amdocs and NVIDIA – A partnership for telco AI excellence
Through our strategic alignment with NVIDIA and our commitment to leveraging advanced AI technologies, we’re empowering telcos to accelerate AI adoption while achieving measurable business outcomes: reduced operational costs, improved customer satisfaction, and revenue growth driven by more efficient, intelligent operations. Our collaboration represents an effective force for AI excellence in our industry, ensuring telcos have the tools and technologies they need to lead toward a future where AI is seamlessly integrated into every aspect of the network and the customer experience.