Amdocs is leading the next wave of telecom AI innovation through a strategic collaboration with NVIDIA. By integrating advanced AI tools like the universal LLM NIM and the Data Flywheel into the amAIz Suite, we’re enabling service providers to scale smarter, faster, and more efficiently.
Amdocs is spearheading the next evolution of AI within the telecommunications and media sector with the amAIz Suite. To achieve this, we are strategically collaborating with NVIDIA, a global leader in accelerated computing, to advance and integrate key NVIDIA AI technologies into our solutions. This collaboration underscores Amdocs' dedication to equipping service providers with cutting-edge capabilities, empowering them to fully harness the transformative potential of artificial intelligence and capitalize on the opportunities it presents. Our collaboration with NVIDIA spans several crucial areas, each designed to address specific challenges and unlock new possibilities within the telecom landscape:
Universal LLM NIM
The universal LLM NIM microservice offers a unified and streamlined approach to deploying LLMs, with support for leading inference frameworks from NVIDIA and the community, including NVIDIA TensorRT-LLM, vLLM, and SGLang. This flexibility gives service providers a consistent, simplified workflow for reliably deploying their choice of open LLMs and specialized variants using the inference backend best suited to their model-serving needs and infrastructure.
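To make the unified workflow concrete, here is a minimal sketch of how an application might query a deployed universal LLM NIM endpoint. It relies on the OpenAI-compatible API that NIM microservices expose; the base URL, model id, and prompts are illustrative assumptions rather than amAIz code, and the same client call works whichever backend (TensorRT-LLM, vLLM, or SGLang) is serving the model.

```python
# Minimal sketch: querying a locally deployed universal LLM NIM through its
# OpenAI-compatible API. Endpoint URL and model id are illustrative and depend
# on how the container was launched.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # default NIM port; adjust to your deployment
    api_key="not-used",                   # local NIM endpoints do not require a real key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # assumed model id; discover ids via client.models.list()
    messages=[
        {"role": "system", "content": "You are a telecom customer-care assistant."},
        {"role": "user", "content": "Why is my roaming data slow abroad?"},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```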
Key advantages of the universal LLM NIM microservice container include:
- Simplified Deployment: The microservice abstracts away much of the complexity involved in deploying LLMs, making it easier and faster to get them up and running.
- High-Performance Inference: The universal LLM NIM microservice is optimized for high-performance inference on NVIDIA accelerated infrastructure, ensuring low latency and high throughput.
- Broad Model Support: The microservice supports models from popular sources like Hugging Face and those in TensorRT-LLM formats, providing access to a vast and rapidly expanding ecosystem of LLMs.
- Optimized by Default: The microservice comes with smart defaults that provide optimized response latency and throughput without requiring manual configuration.
- Performance Tuning: For advanced users, the universal LLM NIM microservice offers simple options for performance tuning and enhancement, allowing fine-grained control over inference parameters (see the sketch following this list).
- Enterprise Support: Part of NVIDIA AI Enterprise, the universal LLM NIM microservice container with multiple inference backends is continuously managed and updated by NVIDIA.
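The defaults and tuning options above are exercised through the same API surface. The following sketch, assuming a localhost deployment, probes the microservice's readiness endpoint, discovers the served model id, and overrides a few sampling parameters while streaming the reply; the parameter values are illustrative, not recommendations.

```python
# Sketch: readiness probe plus sampling-parameter overrides against a NIM
# endpoint. Values shown are illustrative assumptions.
import requests
from openai import OpenAI

BASE = "http://localhost:8000"

# Probe the readiness endpoint; raises if the service is not yet serving.
requests.get(f"{BASE}/v1/health/ready", timeout=5).raise_for_status()

client = OpenAI(base_url=f"{BASE}/v1", api_key="not-used")
model_id = client.models.list().data[0].id  # discover the served model id

stream = client.chat.completions.create(
    model=model_id,
    messages=[{"role": "user", "content": "Summarize my last bill in two sentences."}],
    temperature=0.2,  # lower temperature for more deterministic agent replies
    top_p=0.9,
    max_tokens=128,
    stream=True,      # stream tokens to reduce perceived latency
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```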
Amdocs believes the universal LLM NIM microservice will be particularly valuable for telecom operators using amAIz, supporting a wide range of LLMs across diverse applications such as:
- Care & Sales Agents
- Network Agents & Automations
- Content Creation
- Custom Language Translation
By adopting the universal LLM NIM microservice, Amdocs aims to help service providers overcome the deployment hurdles associated with LLMs and unlock their full potential to drive innovation and efficiency.

The Data Flywheel: Driving Efficiency with Fine-Tuned LLMs
Amdocs also recognizes the critical importance of continuously improving the performance and efficiency of Amdocs AI Agents. To achieve this, we embrace the Data Flywheel: a self-improving loop that uses real-world data and feedback for continuous model tuning, evaluation, and deployment.
At the core of this approach is the ability to fine-tune smaller, more efficient models to achieve the accuracy of much larger models, significantly reducing computational costs and latency. This Data Flywheel methodology involves several key stages, sketched in code after the list:
- Data Curation: The careful selection, cleaning, and preparation of data to ensure its quality and relevance for training AI models.
- Ongoing Experimentation: The continuous exploration of different model architectures, training techniques, and hyperparameters to identify the optimal configuration.
- Rigorous Evaluation: The thorough assessment of model performance using a variety of metrics and benchmarks to measure its accuracy, reliability, and effectiveness.
- Iterative Fine-tuning: The process of adjusting model parameters based on evaluation results to further improve its performance.
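Taken together, these stages form a loop. The following is a purely conceptual sketch of one iteration; every stage function is a deliberately trivial stand-in, not an Amdocs or NVIDIA API, shown only to make the control flow concrete.

```python
# Conceptual sketch of one Data Flywheel iteration. The stage functions are
# trivial placeholders (hypothetical, not Amdocs/NVIDIA APIs) so the
# curate -> experiment -> evaluate -> promote flow is runnable end to end.
from dataclasses import dataclass
import random

@dataclass
class Candidate:
    config: dict
    accuracy: float = 0.0

def curate(logs: list[dict]) -> list[dict]:
    # Stand-in for data curation: keep only records with explicit feedback.
    return [r for r in logs if r.get("feedback") is not None]

def run_experiments(dataset: list[dict]) -> list[Candidate]:
    # Stand-in for experimentation: sweep a hyperparameter such as LoRA rank.
    return [Candidate(config={"lora_rank": r}) for r in (8, 16, 32)]

def evaluate(candidate: Candidate, dataset: list[dict]) -> float:
    # Stand-in for rigorous evaluation on a held-out test set.
    return random.uniform(0.7, 0.9)

def flywheel_iteration(logs: list[dict], baseline: Candidate) -> Candidate:
    dataset = curate(logs)
    candidates = run_experiments(dataset)
    for c in candidates:
        c.accuracy = evaluate(c, dataset)  # score each fine-tuned variant
    best = max(candidates, key=lambda c: c.accuracy)
    # Promote the challenger only if it beats the current baseline.
    return best if best.accuracy > baseline.accuracy else baseline
```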
A crucial aspect of the Data Flywheel is leveraging signals from production traffic and direct user feedback. This real-world data provides valuable insights into how the model performs in actual use and helps identify areas for improvement.
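A common way to make such signals usable is to log every interaction, together with any explicit feedback, as an append-only dataset that the curation stage can later filter. The record schema and file path below are illustrative assumptions, not the amAIz implementation.

```python
# Sketch: capturing production signals as flywheel training data. The record
# schema and path are assumptions for illustration only.
import json
import time
from pathlib import Path
from typing import Optional

FEEDBACK_LOG = Path("flywheel/feedback.jsonl")

def log_interaction(prompt: str, completion: str, thumbs_up: Optional[bool],
                    latency_ms: float) -> None:
    """Append one production interaction, with optional user feedback."""
    record = {
        "ts": time.time(),
        "prompt": prompt,
        "completion": completion,
        "feedback": thumbs_up,     # True/False from the UI, None if no signal
        "latency_ms": latency_ms,  # implicit quality/efficiency signal
    }
    FEEDBACK_LOG.parent.mkdir(parents=True, exist_ok=True)
    with FEEDBACK_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
```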
Amdocs has operationalized the Data Flywheel methodology by implementing a sophisticated LLMOps (Large Language Model Operations) pipeline. This pipeline leverages NVIDIA NeMo microservices and NVIDIA NIM to automate and streamline the various stages of the Data Flywheel process, enabling efficient fine-tuning of LLMs. For example, on specific tasks, fine-tuning a smaller model using this pipeline delivered a substantial performance boost: accuracy on the test set increased from 0.74 for the base Llama 3.1 8B Instruct model to 0.83 for the LoRA-fine-tuned version, even with only 50 training examples. This demonstrates the power of fine-tuning to achieve high accuracy with smaller, more cost-effective models.
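For readers curious what such a pipeline stage might look like, the sketch below submits a LoRA fine-tuning job to a NeMo-Customizer-style endpoint. The service URL, route, and payload shape are assumptions modeled on the general pattern of NVIDIA NeMo microservices; consult the official documentation for the exact schema in your deployment.

```python
# Sketch: submitting a LoRA fine-tuning job to a NeMo Customizer endpoint.
# The URL, route, and payload are assumptions, not a verified API contract.
import requests

CUSTOMIZER = "http://nemo-customizer:8000"  # assumed in-cluster service URL

job = requests.post(
    f"{CUSTOMIZER}/v1/customization/jobs",
    json={
        "config": "meta/llama-3.1-8b-instruct",   # base model to adapt
        "dataset": {"name": "amaiz-task-train"},  # hypothetical curated dataset
        "hyperparameters": {
            "training_type": "sft",
            "finetuning_type": "lora",  # train a small adapter, not all weights
            "epochs": 3,
        },
    },
    timeout=30,
)
job.raise_for_status()
print("job id:", job.json().get("id"))
```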
Furthermore, this LLMOps pipeline, built upon the NVIDIA AI Blueprint for building data flywheels, is integrated directly into the existing CI/CD (Continuous Integration/Continuous Delivery) pipeline of Amdocs' amAIz platform. This tight integration enables powerful evaluation and crucial regression testing of any newly released LLM being considered for use by the amAIz platform, ensuring that only the highest-quality and most reliable LLMs are deployed and that potential issues are identified and addressed early in the development cycle.
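A regression gate of this kind can be as simple as a threshold check in the CI/CD pipeline. The sketch below uses the accuracies reported above as an assumed baseline; the evaluation harness that produces the candidate's score is presumed to exist elsewhere in the pipeline.

```python
# Sketch of a CI/CD regression gate for candidate LLMs. The baseline echoes
# the fine-tuned accuracy reported above (0.83); tolerance is an assumption.
def regression_gate(candidate_accuracy: float,
                    baseline_accuracy: float = 0.83,
                    tolerance: float = 0.01) -> None:
    """Fail the pipeline if a new model regresses past the tolerated margin."""
    if candidate_accuracy < baseline_accuracy - tolerance:
        raise SystemExit(
            f"Regression: candidate scored {candidate_accuracy:.2f}, "
            f"baseline is {baseline_accuracy:.2f} (tolerance {tolerance})."
        )
    print(f"Gate passed: {candidate_accuracy:.2f} >= "
          f"{baseline_accuracy - tolerance:.2f}")

# Example: a candidate matching the fine-tuned baseline passes the gate.
regression_gate(0.84)
```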
By implementing the Data Flywheel, leveraging NVIDIA's technologies, and focusing on efficient fine-tuning strategies, the Amdocs amAIz Suite is establishing a continuous improvement loop for LLMs. This approach allows us to achieve enterprise-level accuracy with smaller, more cost-effective models that are optimized for reduced inference latency and total cost of ownership (TCO), making them ideal for scaling agentic applications. It ensures that our AI solutions are both cutting-edge and economically viable, delivering maximum value to our customers.
amAIz for Sovereign AI Factories
The Amdocs amAIz GenAI platform and amAIz agents are designed to integrate seamlessly with AI factories, leveraging high-performance infrastructure from providers such as Dell and HP together with NVIDIA AI Enterprise software, including NIM and NeMo. amAIz Agents can autonomously manage customer care, sales, and network tasks, enabling telecom-native operations that revolutionize customer engagement and operational efficiency. This deployment drives measurable value for telcos and enterprises, dramatically reducing handling time, repeated calls, and token consumption while improving accuracy, latency, and overall customer satisfaction. With this flexibility, telcos can accelerate their AI adoption by utilizing the amAIz GenAI Platform and Agents via scalable and adaptable AI factories built to meet their specific needs.
Amdocs and NVIDIA - A Partnership for AI Excellence in Telecom
Through our strategic alignment with NVIDIA and our commitment to leveraging these advanced AI technologies (AI factories, the universal LLM NIM microservice, and the Data Flywheel), Amdocs is accelerating the development and deployment of truly innovative AI solutions. We are empowering service providers to not only adapt to but also thrive in the rapidly evolving digital landscape, driving new levels of efficiency, customer engagement, and revenue growth. The collaboration between Amdocs and NVIDIA represents a powerful force for AI excellence in the telecommunications industry, paving the way for a future where AI is seamlessly integrated into every aspect of the network and the customer experience.