
    Huawei Data Center 3.0 Architecture White Paper (English)

    2014-07-24 15:25:28   Source: CTI Forum


      High Throughput Computing Data Center Architecture

      Abstract

      In the last few decades, data center (DC) technologies have kept evolving, from DC 1.0 (tightly-coupled silos) to DC 2.0 (computer virtualization), to enhance data processing capability. Emerging businesses based on big data analysis place highly diversified and time-varying demands on DCs. Due to limitations in throughput, resource utilization, manageability and energy efficiency, the current DC 2.0 falls short of providing the higher throughput and seamless integration of heterogeneous resources that different big data applications require. By rethinking the demands of big data applications, Huawei proposes a high throughput computing data center architecture (HTC-DC). Based on resource disaggregation and interface-unified interconnects, HTC-DC offers PB-level data processing capability, intelligent manageability, high scalability and high energy efficiency. With these competitive features, HTC-DC is a promising candidate for DC 3.0.

      Contents

    • Era of Big Data: A New Data Center Architecture Is Needed

      1. Needs of Big Data Processing

      2. DC Evolution: Limitations and Strategies

      3. Huawei's Vision of the Future DC

    • DC 3.0: Huawei HTC-DC

      1. HTC-DC Overview

      2. Key Features

    • Summary

      ERA OF BIG DATA: A NEW DATA CENTER ARCHITECTURE IS NEEDED

      Needs of Big Data Processing

      During the past few years, applications based on big data analysis have emerged, enriching human life with more real-time and intelligent interactions. Such applications have proven to be the next wave of mainstream online services. As the era of big data approaches, the demand for data processing capability keeps rising. As the major facilities supporting highly varied big data processing tasks, future data centers (DCs) are expected to meet the following big data requirements (Figure 1):

    • PB/s-level data processing capability, ensuring aggregated high-throughput computing, storage and networking;
    • Adaptability to highly varied run-time resource demands;
    • Continuous availability, providing 24x7 large-scale service coverage and supporting high-concurrency access;
    • Rapid deployment, allowing quick roll-out and resource configuration for emerging applications.

      DC Evolution: Limitations and Strategies

      DC technologies have evolved over the last decade (Figure 2) from DC 1.0 (with tightly-coupled silos) to the current DC 2.0 (with computer virtualization). Although the data processing capability of DCs has been significantly enhanced, due to limitations in throughput, resource utilization, manageability and energy efficiency, the current DC 2.0 falls short of meeting the demands of the future:


    Figure 2. DC Evolution

      - Throughput: Compared with the technological improvement in the computational capability of processors, improvement in I/O access performance has long lagged behind. Because computing within the conventional DC architecture largely involves data movement between storage and CPU/memory via I/O ports, it is challenging for the current DC architecture to provide PB-level high throughput for big data applications. The I/O gap results from the low speed of conventional transmission and storage media, and also from inefficient architecture design and data access mechanisms.
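
      As a rough illustration of this I/O gap, the short sketch below estimates how many conventional devices would have to be aggregated to sustain the PB/s-level throughput mentioned earlier. The bandwidth figures are illustrative assumptions (ballpark values for 2014-era devices), not numbers from this white paper.

      # Back-of-envelope sketch of the I/O gap: how many conventional devices
      # would need to be aggregated to sustain PB/s-level throughput.
      # All bandwidth figures are assumed, illustrative values.

      PB_PER_SECOND = 10**15          # target aggregate throughput, bytes/s

      # Assumed per-device bandwidths in bytes/s (rough 2014-era ballpark values).
      devices = {
          "HDD (sequential read)": 150 * 10**6,
          "SATA SSD": 500 * 10**6,
          "10 GbE NIC": 10 * 10**9 // 8,
      }

      for name, bw in devices.items():
          count = PB_PER_SECOND / bw
          print(f"{name}: ~{count:,.0f} devices for 1 PB/s aggregate")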

      To meet the future requirement of high-throughput data processing, adopting new transmission technologies (e.g. optical interconnects) and new storage media are feasible solutions. A more fundamental approach, however, is to redesign the DC architecture as well as the data access mechanisms used for computing. If data access during computation can avoid the conventional I/O mechanism and instead use an ultra-high-bandwidth network as the new I/O functionality, DC throughput can be significantly improved.
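
      The toy timing model below compares the two data paths described above. The bandwidth figures (a 500 MB/s storage device, a 4 GB/s I/O bus, a 100 Gb/s interconnect link) and the function names are assumptions for illustration only, not part of the HTC-DC design; the point is simply that serving data over a high-bandwidth, interface-unified interconnect removes the slow stages of the legacy I/O path.

      # Minimal sketch (assumed bandwidths, hypothetical helper names) comparing
      # the conventional I/O path with access over a high-bandwidth interconnect.

      def conventional_io_seconds(nbytes, device_bw=500e6, io_bus_bw=4e9):
          # Data first leaves the storage device, then crosses the I/O bus
          # before reaching memory; each stage adds transfer time.
          return nbytes / device_bw + nbytes / io_bus_bw

      def interconnect_seconds(nbytes, link_bw=100e9 / 8):
          # Data is fetched from a disaggregated memory/storage pool over an
          # assumed 100 Gb/s link, bypassing the legacy I/O path.
          return nbytes / link_bw

      working_set = 10**9  # move 1 GB of data to the compute node
      print(f"conventional I/O path: {conventional_io_seconds(working_set):.2f} s")
      print(f"unified interconnect : {interconnect_seconds(working_set):.3f} s")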
