NCI 7.5 Deep Dive – Part Three: Flow CNI
- Naser Ebdah
- Dec 25, 2025
- 2 min read
Modern enterprise applications no longer run exclusively on virtual machines or Kubernetes. Most environments today operate a mix of both, and ensuring these workloads can communicate securely, scale consistently, and remain centrally managed is a growing challenge.
In Part Three of our NCI 7.5 deep-dive series, we focus on Flow Container Network Interface (Flow CNI). Flow CNI delivers unified networking for virtual machines and Kubernetes pods by extending Nutanix VPC constructs directly into Kubernetes clusters. This enables true hybrid application architectures where VMs and containers share the same networking and operational model.
Flow CNI is designed specifically for NKP-managed Kubernetes clusters, providing a Kubernetes-native way to eliminate networking silos and simplify hybrid operations on Nutanix.
What Is Flow CNI?
Flow CNI is a Kubernetes networking plug-in that integrates Nutanix Flow virtual networking with Kubernetes clusters created and managed by Nutanix Kubernetes Platform (NKP). It uses overlay-based networking with Geneve encapsulation and native Kubernetes integration to allow:
Kubernetes pods and virtual machines to coexist within the same VPC
Consistent networking and security policies across compute types
Centralized visibility and control through Prism Central
Important: Flow CNI is supported only for NKP-managed Kubernetes clusters.
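As an aside on what Geneve encapsulation means on the wire: each overlay packet carries a small tunnel header that includes a 24-bit Virtual Network Identifier (VNI), which is what keeps tenant networks isolated from one another. The sketch below packs the fixed 8-byte Geneve base header defined in RFC 8926; the helper name and defaults are ours for illustration and are not part of Flow CNI itself.

```python
import struct

# EtherType for Trans-Ether-Bridging: the inner payload is an Ethernet frame.
GENEVE_PROTO_ETHERNET = 0x6558

def geneve_base_header(vni: int, opt_len_words: int = 0) -> bytes:
    """Pack the fixed 8-byte Geneve base header (RFC 8926, version 0).

    vni: 24-bit Virtual Network Identifier isolating one overlay from another.
    opt_len_words: length of any variable-length options, in 4-byte words.
    """
    if not 0 <= vni < (1 << 24):
        raise ValueError("VNI must fit in 24 bits")
    byte0 = (0 << 6) | (opt_len_words & 0x3F)  # Ver=0 | Opt Len
    byte1 = 0                                   # O (OAM) and C (critical) flags clear
    return (struct.pack(">BBH", byte0, byte1, GENEVE_PROTO_ETHERNET)
            + vni.to_bytes(3, "big")            # 24-bit VNI
            + b"\x00")                          # reserved byte

hdr = geneve_base_header(vni=5001)
print(len(hdr), hex(int.from_bytes(hdr[4:7], "big")))
```

In a real deployment this framing is handled entirely by the data plane; the point is only that the VNI in each tunnel header is what lets pods and VMs in different VPCs share the same physical network without seeing each other's traffic.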
Key Capabilities
Flow CNI delivers enterprise-grade networking features for Kubernetes environments, including:
Overlay networking for pods using Geneve encapsulation
Pod-to-service communication using cluster IP addresses and load balancing
East-west and north-south traffic within and across nodes
External access using SNAT, egress IPs, and egress services
Namespace-to-VPC mapping, enabling multiple VPCs within a single Kubernetes cluster
Federation of VM and Kubernetes control planes across clusters
Shared VPCs for hybrid VM and Kubernetes deployments, enabling service discovery and access between VMs and Kubernetes services
Deployment Models
Flow CNI supports two primary deployment patterns.
1) Kubernetes-Only Environment
A default VPC is created for Kubernetes pods
No VM networking is associated with the VPC
2) Federated VM and Kubernetes Environment
Kubernetes clusters are associated with VPCs that already include VMs
Pods and VMs communicate over the same VPC networks
Prerequisites at a Glance
Before deploying Flow CNI, ensure the following requirements are met:
NKP-managed Kubernetes clusters only
Required Nutanix software versions:
NKP 2.15 or later
AOS 7.5 or later
AHV 11.0 or later
Prism Central pc.7.5 or later
Network Controller 7.0.0 or later
Flow deployed in "Integrated with Prism Central" mode (Flow Standalone mode is not currently supported)
Helm installed and Flow CNI Helm package accessible
Non-overlapping CIDRs for pods, services, and VPCs
Kubernetes cluster onboarded to an XL Prism Central instance
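The non-overlapping CIDR requirement is worth validating before deployment. A minimal sketch using Python's standard ipaddress module can catch collisions in a planned address layout; the example ranges below are placeholders, not recommended values.

```python
import ipaddress
from itertools import combinations

def find_overlaps(cidrs):
    """Return every pair of CIDR blocks in the list that overlap."""
    nets = [ipaddress.ip_network(c) for c in cidrs]
    return [(str(a), str(b)) for a, b in combinations(nets, 2) if a.overlaps(b)]

# Hypothetical pod, service, and VPC ranges; substitute your own plan.
plan = ["10.244.0.0/16", "10.96.0.0/12", "192.168.10.0/24"]
print(find_overlaps(plan))  # an empty list means no ranges collide
```

Running this against the full set of pod, service, and VPC CIDRs in a design document takes seconds and avoids a class of problems that only surfaces after routes are programmed.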
Summary
Flow Container Network Interface bridges one of the most common gaps in enterprise platforms: consistent networking across VMs and Kubernetes. By extending VPC constructs into NKP-managed clusters, Flow CNI simplifies operations, enables hybrid architectures, and provides a strong foundation for application modernization on Nutanix.

