Academic Seminar: “Big Data” and Its Impact on Server and High Speed Link Design

Published: 2012-06-16   Posted by: icis

Date: 10:00 a.m., June 18, 2012

Venue: Rm# 311, Main Building of IMETU (Room 311, new building of the Institute of Microelectronics)

 

Xingchao (Chuck) Yuan received his B.S. degree in Electronic Engineering from Nanjing Institute of Technology (now Southeast University), Nanjing, China, in 1982. He received his M.S. and Ph.D. degrees in Electrical Engineering from Syracuse University, Syracuse, New York, in 1983 and 1987, respectively. From 1987 to 1990, Dr. Yuan was at the Thayer School of Engineering at Dartmouth College, first as a postdoctoral fellow and later as a research assistant professor.


According to a 2011 McKinsey research report, “big data” is the next frontier for innovation, competition, and productivity. Enormous amounts of value could be captured: for example, $300B/year for US healthcare, $350B for Europe’s public sector administration, $600B/year in consumer surplus, and roughly 2 million jobs created in the US alone.

This presentation consists of two parts. In the first part, we fly at “10,000 meters” by answering the following questions:

  • What is “big data”?
  • What computing technologies and infrastructures does it require?
  • What challenges must be overcome?
  • What does it mean for designing servers, and in particular high speed links?

After reviewing the modern Internet architecture, we identify three major hardware-design challenges that must be overcome to enable “big data” applications:

  • Power “wall”:
    • 100’s of megawatts are consumed by a single datacenter
  • Interconnect “wall”:
    • 100’s of thousands of servers need to be interconnected at high speed
  • Memory and storage “wall”:
    • Virtualization and multicore demand more memory capacity and bandwidth
    • Memory (DRAM/Flash) density scaling is slowing and will end in a few years

By examining a few examples, we explore how various technologies and techniques could be used to address these challenges. Specifically, we review and compare several server architectures from Intel, IBM, and SeaMicro (AMD). We also show a few examples of how 3D packaging, photonics, and new memory technologies could be deployed to ease the scaling pain. Through these examples, the importance of interconnect is emphasized.

In the second part of the presentation, we come down to earth and describe how one could “scale” the interconnect wall. That is, we dive deeply into the design of high speed memory and chip-to-chip interconnects, or high speed links. In particular, we present a high-level design and modeling methodology for achieving aggressive design targets: first a statistical link modeling methodology, then a methodology for modeling power supply noise induced jitter. We illustrate the effectiveness of these methodologies with two next-generation memory system design examples:

  • Low power: a 3.2 Gbps mobile memory
  • High performance: a 20 Gbps graphics memory

We explain the special characteristics and requirements of each design and describe the techniques used to achieve low power and high performance.
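The statistical link modeling methodology mentioned above typically works with probability distributions rather than bit-by-bit simulation: for random ±1 data, each residual ISI cursor tap contributes ±h with equal probability, so the total ISI distribution is a convolution of two-point distributions, and the bit-error rate is the Gaussian noise tail probability averaged over that distribution. The following minimal Python sketch illustrates that idea; the tap values, main-cursor amplitude, and noise level are illustrative assumptions, not numbers from the talk.

```python
import numpy as np
from math import erfc, sqrt

def isi_pmf(taps, grid):
    """PMF of inter-symbol interference for random +/-1 data.

    Each residual cursor tap of amplitude h contributes +h or -h
    with probability 1/2, so the ISI distribution is the convolution
    of two-point distributions -- the core of statistical link analysis.
    The amplitude grid must be wide enough that no mass wraps around.
    """
    dx = grid[1] - grid[0]
    pmf = np.zeros(grid.size)
    pmf[grid.size // 2] = 1.0            # start with a delta at 0 V
    for h in taps:
        k = int(round(h / dx))           # tap amplitude in grid steps
        pmf = 0.5 * (np.roll(pmf, k) + np.roll(pmf, -k))
    return pmf

def link_ber(main_cursor, taps, sigma_noise, grid):
    """Bit-error rate: average the Gaussian tail probability
    Q((main + isi) / sigma) over the ISI distribution."""
    pmf = isi_pmf(taps, grid)
    q = np.array([0.5 * erfc((main_cursor + x) / (sigma_noise * sqrt(2)))
                  for x in grid])
    return float(np.dot(pmf, q))

# Hypothetical channel: 0.5 V main cursor, three residual ISI taps,
# 20 mV rms Gaussian noise (illustrative values only).
grid = np.linspace(-0.5, 0.5, 2001)
ber = link_ber(0.5, [0.05, -0.03, 0.02], 0.02, grid)
```

Because the ISI PMF is built once and reused, the resulting BER estimate covers all data patterns at once, which is what lets statistical methods reach the very low error rates (e.g. 1e-15) that time-domain simulation cannot practically sample. The same averaging structure can absorb additional jitter or supply-noise distributions by convolving them in as well.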