Outline

  • Abstract
  • I. Introduction
  • II. Background
  • A) Trie-Based IP Lookup
  • B) Memory-Balanced Pipelines
  • C) Parallel IP Lookup Engines
  • III. POLP Architecture Overview
  • IV. Memory Balancing
  • A) Trie Partitioning
  • B) Subtrie-to-Pipeline Mapping
  • C) Node-to-Stage Mapping
  • D) Disabling/Enabling in the Pipeline
  • V. Traffic Balancing
  • A) Pipelined Prefix Caching
  • B) Dynamic Subtrie-to-Pipeline Remapping
  • VI. Performance Evaluation
  • A) Memory Balancing across Pipelines and across Stages
  • B) Effectiveness of Prefix Caching and Dynamic Remapping
  • C) Overall Performance
  • VII. Conclusions and Future Work

Abstract

Continuous growth in network link rates poses a strong demand for high-speed IP lookup engines. While Ternary Content Addressable Memory (TCAM) based solutions serve most of today's high-end routers, they do not scale well to the next generation. Pipelined SRAM-based algorithmic solutions, on the other hand, have become attractive. Intuitively, multiple pipelines can be utilized in parallel to achieve a multiplicative effect on throughput. However, several challenges must be addressed for such solutions to realize high throughput. First, the memory distribution across the different stages of each pipeline, as well as across different pipelines, must be balanced. Second, the traffic on the various pipelines should be balanced. In this paper, we propose a parallel SRAM-based multi-pipeline architecture for terabit IP lookup. To balance the memory requirement over the stages, a two-level mapping scheme is presented. Through trie partitioning and subtrie-to-pipeline mapping, we ensure that each pipeline contains an approximately equal number of trie nodes. Then, within each pipeline, a fine-grained node-to-stage mapping is used to achieve evenly distributed memory across the stages. To balance the traffic on different pipelines, both pipelined prefix caching and dynamic subtrie-to-pipeline remapping are employed. Simulation using real-life data shows that the proposed architecture with 8 pipelines can store a core routing table with over 200K unique routing prefixes using 3.5 MB of memory. It achieves a throughput of up to 3.2 billion packets per second, i.e. 1 Tbps for minimum-size (40-byte) packets.
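The first level of the mapping scheme, subtrie-to-pipeline mapping, can be illustrated with a simple greedy sketch: assign subtries, largest first, to whichever pipeline currently holds the fewest trie nodes. This is only a minimal illustration of the load-balancing idea, not the paper's actual algorithm; the function name and data shapes are my own.

```python
def map_subtries_to_pipelines(subtrie_sizes, num_pipelines):
    """Greedy largest-first assignment of subtries to pipelines,
    approximating an equal number of trie nodes per pipeline.

    subtrie_sizes: list where entry i is the node count of subtrie i.
    Returns (assignment, loads): assignment maps subtrie index to
    pipeline index; loads gives the resulting node count per pipeline.
    """
    loads = [0] * num_pipelines
    assignment = {}
    # Place big subtries first so small ones can fill the gaps.
    for idx, size in sorted(enumerate(subtrie_sizes), key=lambda t: -t[1]):
        target = loads.index(min(loads))  # currently lightest pipeline
        assignment[idx] = target
        loads[target] += size
    return assignment, loads
```

For example, mapping subtries of sizes [50, 30, 20, 10] onto 2 pipelines yields per-pipeline loads of 60 and 50, close to an even split.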


Conclusions

This paper proposed a parallel SRAM-based multi-pipeline architecture for terabit trie-based IP lookup. A two-level mapping scheme was proposed to balance the memory requirement among pipelines and across stages. We designed pipelined prefix caches and proposed an exchange-based dynamic subtrie-to-pipeline remapping algorithm to balance the traffic among multiple pipelines. The proposed architecture with 8 pipelines can store a core routing table with over 200K unique routing prefixes using 3.5 MB of memory, and can achieve a high throughput of up to 3.2 billion packets per second, i.e. 1 Tbps for minimum-size (40-byte) packets. We plan to study the traffic distribution in real-life routers, which has a large effect on cache performance. Future work also includes applying the proposed architecture to multi-dimensional packet classification.
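The exchange-based remapping idea can be sketched as a single rebalancing round: find the busiest and least-busy pipelines by recent hit counts, then move the subtrie whose relocation best narrows the traffic gap. This is a simplification under my own assumptions (function name, one move per round, hit counts as the load metric), not the paper's full algorithm.

```python
def remap_one(subtrie_hits, assignment):
    """One round of traffic rebalancing across pipelines.

    subtrie_hits: maps subtrie index to its recent lookup-hit count.
    assignment:   maps subtrie index to its current pipeline index.
    Moves at most one subtrie from the hottest to the coldest pipeline,
    choosing the move that most reduces the traffic gap. Mutates and
    returns `assignment`.
    """
    load = {}
    for s, p in assignment.items():
        load[p] = load.get(p, 0) + subtrie_hits[s]
    hot = max(load, key=load.get)
    cold = min(load, key=load.get)
    if hot == cold:
        return assignment  # single pipeline: nothing to balance
    best, best_gap = None, load[hot] - load[cold]
    for s, p in assignment.items():
        if p != hot:
            continue
        h = subtrie_hits[s]
        # Gap if subtrie s moved from the hot to the cold pipeline.
        new_gap = abs((load[hot] - h) - (load[cold] + h))
        if new_gap < best_gap:
            best, best_gap = s, new_gap
    if best is not None:
        assignment[best] = cold
    return assignment
```

With hit counts {0: 100, 1: 10, 2: 10} and subtries 0 and 1 on one pipeline, a round moves subtrie 1 to the other pipeline, shrinking the traffic gap from 100 to 80.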
