Network Pipelining in Direct Connect Architectures
Matthew Williams, Rockport
Direct Connect networks have several advantages over traditional networks with centralized switching and offer unique opportunities to deploy computational resources within every network element.
In Direct Connect architectures, endpoints not only contain network interfaces; they also include the elements that form the distributed network itself. The network interface and network functions are collocated on Network Cards. The hardware resources within these Network Cards allow a pipeline of computational resources to be deployed across the network. These resources are available within both the servers and the storage devices on the network, and can be used to offload various parasitic workload-support functions from the server. For example, within a server, data compression is a powerful function that can be offloaded to the local Network Card, while data analysis and indexing functions can be easily deployed within storage devices.
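As a rough illustration of the idea, the sketch below models a Network Card as a host for an ordered pipeline of offloaded stages: a server-side card compresses data before it leaves the endpoint, and a storage-side card decompresses and indexes it on arrival. The `NetworkCard` class, stage names, and the toy indexing function are all hypothetical and purely illustrative; they are not part of any real product API.

```python
import zlib

class NetworkCard:
    """Hypothetical model of a Direct Connect endpoint's Network Card,
    hosting a configurable pipeline of offloaded compute stages."""

    def __init__(self):
        self.pipeline = []  # ordered list of (name, fn) stages

    def add_stage(self, name, fn):
        self.pipeline.append((name, fn))

    def process(self, payload: bytes) -> bytes:
        # Run each offloaded stage in order as the data passes through
        # the card, relieving the host CPU of this work.
        for _name, fn in self.pipeline:
            payload = fn(payload)
        return payload

# Server-side card: offload compression from the host.
server_card = NetworkCard()
server_card.add_stage("compress", zlib.compress)

# Storage-side card: offload a toy indexing function that records the
# decompressed length before the data is written.
index = {}

def index_stage(data: bytes) -> bytes:
    raw = zlib.decompress(data)
    index["length"] = len(raw)
    return raw

storage_card = NetworkCard()
storage_card.add_stage("decompress_and_index", index_stage)

message = b"telemetry " * 100
wire = server_card.process(message)    # compressed on the way out
stored = storage_card.process(wire)    # decompressed and indexed on arrival
assert stored == message
```

In a real deployment the stages would run on the Network Card's hardware rather than on the host, but the structure is the same: each card contributes one step of a distributed pipeline, so the work is spread across the network elements instead of concentrated in the servers.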
During this presentation, we will explore a detailed example of a Direct Connect network that uses network pipelining to offload significant work from high-performance server clusters.
Matthew has a B.Sc. in Electrical Engineering with First Class Honours from Queen's University, Kingston, Canada and is a registered P.Eng.
Matthew has 24 years of technical leadership and engineering experience, including 12 years as CTO of successful network technology companies, and holds 21 issued US patents. He is an expert strategist, analyst, and visionary who has delivered on transformational product concepts and obtained buy-in at the highest levels of Fortune 50 companies. Matt is an insightful and energetic communicator who enjoys product evangelization and inspiring global business and technical audiences.