MCX512A-ACAT Mellanox ConnectX-5 Dual-Port 10/25GbE SFP28 EN Adapter Card, PCIe 3.0 x8
Details:
| Brand: | Mellanox |
|---|---|
| Model number: | MCX512A-ACAT |
| Document: | connectx-5-en-card.pdf |
Payment & Shipping Terms:
| Min. order quantity: | 1 pc |
|---|---|
| Price: | Negotiable |
| Packaging details: | Outer carton |
| Delivery time: | Based on stock |
| Payment terms: | T/T |
| Supply ability: | Supplied per project/batch |

Detailed Information:
| Application: | Server | Interface type: | Network |
|---|---|---|---|
| Ports: | Dual | Max speed: | 25GbE |
| Connection type: | SFP28 | Type: | Wired |
| Condition: | New and original | Model: | MCX512A-ACAT |
| Name: | Mellanox MCX512A-ACAT ConnectX-5 EN adapter card, 10/25GbE dual-port SFP28, PCIe 3.0 | Keyword: | Mellanox network card |
Product Description
Dual-port SFP28 25GbE Ethernet adapter card — delivering up to 25Gb/s per port, ultra-low latency, and advanced application offloads. Ideal for cloud, Web 2.0, storage, AI, and virtualization platforms requiring high bandwidth with exceptional CPU efficiency.
The NVIDIA ConnectX-5 EN MCX512A-ACAT is a dual-port 25GbE Ethernet adapter card designed for data centers that demand high throughput, low latency, and efficient server utilization. Built on the ConnectX-5 architecture, this adapter supports 25GbE, 10GbE, and 1GbE speeds, providing seamless migration from 10GbE to 25GbE infrastructure. With ultra-low latency, a high message rate, and a PCIe 3.0 x8 host interface, the MCX512A-ACAT delivers industry-leading performance for virtualized and bare-metal environments.

Key capabilities include RoCE (RDMA over Converged Ethernet), SR-IOV virtualization with up to 512 Virtual Functions, ASAP2 accelerated switching and packet processing for vSwitch/vRouter offloads, NVMe over Fabrics offloads, T10-DIF Signature Handover, comprehensive overlay network offloads (VXLAN, NVGRE, GENEVE), and UEFI support. The adapter is available in a low-profile PCIe form factor and is ideal for top-of-rack server connectivity.
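As a quick post-install sanity check, the card should enumerate on the PCIe bus and report its driver, firmware, and negotiated speed. A hedged sketch with standard Linux tools; the interface name `ens1f0` is an assumption for your system:

```shell
# Confirm the card enumerates (ConnectX-5 appears as a Mellanox
# Ethernet controller; the exact device string varies by firmware).
lspci | grep -i mellanox

# Driver, firmware version, and bus address for one port
# ("ens1f0" is a placeholder -- substitute your own interface).
ethtool -i ens1f0

# The negotiated link speed should read 25000Mb/s on a 25GbE link.
ethtool ens1f0 | grep -i speed
```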
- Dual SFP28 Ports – Two SFP28 ports supporting 25GbE, 10GbE, and 1GbE speeds; backward compatible with existing 10GbE infrastructure.
- Ultra-Low Latency – Sub-microsecond latency and a high message rate for latency-sensitive applications such as HFT and NVMe-oF.
- RoCE – Low-latency RDMA services over Layer 2 and Layer 3 networks for storage and compute workloads.
- ASAP2 – Hardware offload of the Open vSwitch (OvS) and vRouter data plane, achieving wire-speed performance while reducing CPU load.
- NVMe-oF Offloads – Hardware-accelerated NVMe-oF target offloads enabling efficient NVMe storage access with near-zero CPU intervention.
- SR-IOV – Up to 512 Virtual Functions (VFs) and 8 Physical Functions per port, with guaranteed QoS and VM isolation.
- Overlay Network Offloads – Hardware encapsulation and de-encapsulation for VXLAN, NVGRE, GENEVE, MPLS, and NSH tunnels.
- Flexible Pipeline – Flexible parser and match-action tables enabling hardware offloads for current and future protocols.
- Host Management – NC-SI over MCTP, BMC interface, PLDM for monitoring and firmware update, PXE and UEFI remote boot.
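The ASAP2 vSwitch offload described above is typically enabled through Open vSwitch and the kernel devlink interface. A minimal sketch, assuming MLNX_OFED or an equivalent mlx5 driver stack is installed; the PCI address is a placeholder, and the service name varies by distribution:

```shell
# Turn on OVS hardware offload (takes effect after a daemon restart;
# the service may be named "openvswitch-switch" on Debian/Ubuntu).
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
systemctl restart openvswitch

# Put the physical function into switchdev mode so flows can be
# offloaded to the NIC eSwitch ("0000:01:00.0" is a placeholder).
devlink dev eswitch set pci/0000:01:00.0 mode switchdev

# Verify which datapath flows actually landed in hardware.
ovs-appctl dpctl/dump-flows type=offloaded
```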
The ConnectX-5 EN ASIC delivers record-setting performance with advanced acceleration engines. Key technological innovations include:
- PeerDirect (GPUDirect) – Eliminates unnecessary PCIe data copies between GPU and CPU, accelerating HPC, AI, and machine learning workloads.
- Adaptive Routing on Reliable Transport – Enables out-of-order RDMA and adaptive routing for optimized fabric utilization.
- Tag Matching and Rendezvous Offloads – Hardware offload of MPI tag matching and rendezvous protocol, reducing CPU overhead in HPC clusters.
- Burst Buffer Offloads – Hardware acceleration for background checkpointing in large-scale simulations and ML training.
- Embedded PCIe Switch – Supports up to 8 bifurcations, enabling host chaining and elimination of backend switches in storage racks.
- On-Demand Paging (ODP) – Registration-free RDMA memory access, simplifying application development.
- Extended Reliable Connected (XRC) and Dynamically Connected Transport (DCT) – Scales RDMA to tens of thousands of nodes.
- T10-DIF Signature Handover – Hardware-based data integrity protection for storage workloads at wire speed.
- Cloud and Virtualization – High-density virtualization, overlay networks, and vSwitch offloads reduce CPU utilization while maintaining wire-speed 25GbE performance.
- Storage – NVMe-oF target offloads, T10-DIF, and RoCE enable high-performance block storage with sub-microsecond latency.
- AI and Machine Learning – PeerDirect GPUDirect and adaptive routing accelerate distributed training workloads at 25GbE.
- NFV – ASAP2 vSwitch offloads and service chaining enable efficient Network Function Virtualization.
- 10GbE-to-25GbE Migration – Seamlessly upgrade from 10GbE to 25GbE while maintaining backward compatibility with existing switches and cables.
- Dense VM Deployments – SR-IOV with up to 512 VFs enables dense VM deployments with guaranteed performance isolation.
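SR-IOV Virtual Functions are provisioned through sysfs once SR-IOV and the IOMMU are enabled in server firmware. A sketch under those assumptions; the interface name `ens1f0` is a placeholder:

```shell
# How many VFs the driver allows on this port (bounded by the
# adapter's 512-VF total).
cat /sys/class/net/ens1f0/device/sriov_totalvfs

# Create 8 VFs; the value must not exceed the total reported above.
echo 8 > /sys/class/net/ens1f0/device/sriov_numvfs

# The new VFs appear as additional PCI functions.
lspci | grep -i "virtual function"
```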
The MCX512A-ACAT is compatible with a wide range of operating systems: RHEL/CentOS, Ubuntu, Windows Server, FreeBSD, VMware ESXi, and Citrix XenServer. It supports standard 25GbE SFP28 optics, passive DAC cables, active optical cables (AOC), and breakout configurations. The adapter integrates seamlessly with NVIDIA Spectrum switches and any standards-based 10GbE/25GbE infrastructure. Software support includes OFED (OpenFabrics Enterprise Distribution), DPDK, and WinOF-2 for Windows. UEFI support enables modern server boot environments.
| Category | Specification |
|---|---|
| Model | MCX512A-ACAT |
| Form Factor | Low-profile PCIe add-in card. Ships with tall bracket mounted, short bracket included. |
| Ports | 2x SFP28 (25GbE / 10GbE / 1GbE) |
| Supported Speeds | 25GbE, 10GbE, 1GbE |
| Host Interface | PCIe 3.0 x8 (compatible with x16, x4, x2, x1; auto-negotiated) |
| Message Rate | Up to 200 million messages per second |
| Latency | Sub-microsecond (typical) |
| Virtualization | SR-IOV: up to 512 Virtual Functions, 8 Physical Functions per port |
| RoCE Support | Yes – RDMA over Converged Ethernet (RoCE) |
| Overlay Offloads | VXLAN, NVGRE, GENEVE, MPLS, NSH hardware encapsulation and de-encapsulation |
| vSwitch/vRouter Offloads | ASAP2 – Open vSwitch (OvS) and vRouter data plane offload with flexible match-action tables |
| Storage Offloads | NVMe-oF target offloads, T10-DIF Signature Handover, SRP, iSER, NFS RDMA, SMB Direct |
| Enhanced Features | Tag matching, rendezvous offload, adaptive routing, burst buffer offload, embedded PCIe switch, ODP, XRC, DCT |
| CPU Offloads | TCP/UDP stateless offloads, LSO/LRO, checksum offload, RSS/TSS, HDS, VLAN/MPLS tag insertion/stripping |
| Management Interfaces | NC-SI over MCTP (SMBus/PCIe), BMC interface, PLDM (monitoring and firmware update), SDN eSwitch management, SPI, JTAG |
| Remote Boot | PXE, UEFI, iSCSI remote boot |
| UEFI Support | Yes – UEFI enabled (x86 and Arm platforms) |
| Power Consumption | Not publicly specified – please confirm before ordering |
| Operating Temperature | 0°C to 55°C (typical) |
| Standards | IEEE 802.3by (25GbE), 802.3ae (10GbE), 802.3az EEE, 802.1Qbb PFC, 802.1Qaz ETS, 802.1Qau QCN, 1588v2, PCIe Gen 3.0 |
| RoHS | Compliant |
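To put the 200-million-messages-per-second figure in context, a quick back-of-envelope calculation shows the packet rate a 25GbE port can physically generate at minimum frame size. A sketch; the function name is ours, and the 20-byte figure is standard Ethernet wire overhead (7B preamble + 1B SFD + 12B inter-frame gap):

```shell
#!/bin/sh
# Packets per second at line rate for a given frame size.
pps_at_line_rate() {
  rate_bps=$1   # line rate in bits per second
  frame_b=$2    # frame size in bytes
  echo $(( rate_bps / ((frame_b + 20) * 8) ))
}

# 25GbE at minimum 64-byte frames: ~37.2 million packets/s per port,
# so both ports at line rate stay well under the adapter's limit.
pps_at_line_rate 25000000000 64    # -> 37202380
```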
| OPN (Ordering Part Number) | Ports | Max Speed | Interface | Host Interface | Key Feature |
|---|---|---|---|---|---|
| MCX512A-ACAT | 2 | 25GbE | SFP28 | PCIe 3.0 x8 | Dual-port 25GbE, UEFI enabled, RoCE, ASAP2 |
| MCX512A-ADAT | 2 | 25GbE | SFP28 | PCIe 3.0 x8 | ConnectX-5 Ex enhanced version |
| MCX512F-ACAT | 2 | 25GbE | SFP28 | PCIe 3.0 x16 | Enhanced host management |
| MCX516A-CCAT | 2 | 100GbE | QSFP28 | PCIe 3.0 x16 | Dual-port 100GbE for spine connectivity |
| MCX516A-CDAT | 2 | 100GbE | QSFP28 | PCIe 4.0 x16 | ConnectX-5 Ex 100GbE with PCIe 4.0 |
Future-proof your data center with 2.5x the bandwidth of 10GbE while maintaining backward compatibility.
200 Mpps enables the highest packet processing density for vSwitch, NFV, and latency-sensitive applications.
NVMe-oF, T10-DIF, ASAP2, and RoCE offloads dramatically reduce CPU utilization and improve application performance.
Hong Kong Starsurge offers competitive pricing, warranty support, and fast worldwide delivery.
Hong Kong Starsurge provides end-to-end support for NVIDIA/Mellanox adapters, including compatibility verification, firmware updates, and technical troubleshooting. Standard warranty aligns with NVIDIA's limited hardware warranty (1 year return-and-repair). Extended support options are available upon request. Our team can assist with driver installation, performance tuning, RoCE configuration, and integration into existing server, storage, and network environments.
| Category | Supported Options |
|---|---|
| Operating Systems | RHEL/CentOS 7/8/9, Ubuntu 18.04+, Windows Server 2016/2019/2022, FreeBSD 12+, VMware ESXi 6.7/7.0/8.0, Citrix XenServer |
| Switches | NVIDIA Spectrum SN3000/SN3420/SN3700 series, Cisco Nexus 3000/9000, Arista 7000 series, Juniper QFX series, any standards-based 10/25GbE switch |
| Cables and Optics (25GbE) | SFP28 passive DAC (up to 5m), SFP28 AOC, 25GBASE-SR (850nm, up to 100m), 25GBASE-LR (1310nm, up to 10km) |
| Cables and Optics (10GbE) | SFP+ passive DAC, SFP+ AOC, 10GBASE-SR, 10GBASE-LR |
| Management Protocols | NC-SI, MCTP over PCIe/SMBus, PLDM for monitoring and firmware update, SDN eSwitch management |
- Confirm server has an available PCIe x8 (or larger) slot – Gen 3.0 or higher.
- Determine required cable type: passive DAC (short distance), active optical (medium distance), or optical transceivers (long distance) for 25GbE operation.
- Verify operating system driver availability from NVIDIA/Mellanox official site (latest OFED or inbox drivers).
- Ensure your switch supports 25GbE SFP28 ports (most modern top-of-rack switches do).
- For RoCE deployments, confirm switch support for DCB (PFC, ETS, ECN) and congestion notification.
- For NVMe-oF target offloads, verify your storage software stack compatibility.
- If upgrading from 10GbE, confirm existing SFP+ optics can be used at 10GbE mode on this adapter.
- For UEFI boot, verify server firmware compatibility.
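For the RoCE item in the checklist above, lossless behavior is usually arranged by enabling PFC on the priority that carries RDMA traffic, on both the NIC and the switch. A hedged sketch using tools shipped with MLNX_OFED; the interface name, device name, and choice of priority 3 are assumptions:

```shell
# Enable PFC on priority 3 only (a common convention for RoCE).
mlnx_qos -i ens1f0 --pfc 0,0,0,1,0,0,0,0

# Confirm RoCE-capable devices are visible to the verbs stack.
ibv_devices
```

On the switch side, the matching PFC/ETS settings must be applied to the ports facing the adapter, or congestion will cause RoCE packet loss.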
- Dual-port 100GbE adapter for spine connectivity and high-bandwidth uplinks.
- 48x 25GbE + 12x 100GbE top-of-rack switch for leaf/spine fabrics.
- Passive copper direct-attach cables for 25GbE connections up to 5 meters.
- ConnectX-5 Ex enhanced version for additional performance optimizations.
- RoCE Deployment Guide for ConnectX-5 Series
- 25GbE Migration: Best Practices from 10GbE to 25GbE
- ASAP2 Open vSwitch Offload Configuration Guide
- NVMe over Fabric with ConnectX-5 Best Practices
- SR-IOV Configuration on VMware ESXi with Mellanox Adapters
Hong Kong Starsurge Group Co., Limited is a technology-driven provider of network hardware, IT services, and system integration since 2008. Serving government, healthcare, manufacturing, finance, education, and enterprise clients worldwide. We deliver switches, NICs, wireless solutions, IoT systems, and custom software with multilingual support and global delivery. With a customer-first approach, Starsurge ensures reliable quality, responsive service, and tailored network infrastructure solutions.