
Hindawi Publishing Corporation

Mobile Information Systems


Volume 2016, Article ID 6146435, 7 pages
http://dx.doi.org/10.1155/2016/6146435

Research Article
An Optimal Routing Algorithm in Service Customized 5G Networks

Haipeng Yao,1 Chao Fang,1 Yiru Guo,2 and Chenglin Zhao3

1 State Key Lab of Networking and Switching Technology, Beijing University of Posts and Telecommunications, P.O. Box 199, No. 10, Xitucheng Road, Haidian District, Beijing 100876, China
2 Beijing University of Posts and Telecommunications, P.O. Box 199, No. 10, Xitucheng Road, Haidian District, Beijing 100876, China
3 Key Lab of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, P.O. Box 199, No. 10, Xitucheng Road, Haidian District, Beijing 100876, China

Correspondence should be addressed to Haipeng Yao; yaohaipeng@bupt.edu.cn

Received 5 November 2015; Accepted 21 December 2015

Academic Editor: Xin Wang

Copyright © 2016 Haipeng Yao et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

With the widespread use of the Internet, mobile data traffic is growing explosively, which makes 5G a growing concern for cellular networks. Recently, ideas related to the future network, for example, Software Defined Networking (SDN), Content-Centric Networking (CCN), and Big Data, have drawn more and more attention. In this paper, we propose a service-customized 5G network architecture that introduces the separation of control plane and data plane, in-network caching, and Big Data processing and analysis to resolve the problems traditional cellular radio networks face. Moreover, we design an optimal routing algorithm for this architecture, which minimizes the average response hops in the network. Simulation results reveal that, by introducing the cache, network performance can be noticeably improved under different network conditions compared to the scenario without a cache. In addition, we explore how the cache hit rate and the average response hops change under different cache replacement policies, cache sizes, content popularities, and network topologies.

1. Introduction

Mobile communication is the fastest growing field in the telecommunications industry, and the cellular radio network is the most successful mobile communication system. A new mobile generation has appeared approximately every 10 years since the first 1G system was introduced in 1982. The first 2G system was commercially deployed in 1992, and the first 3G system appeared in 2001. 4G systems fully compliant with IMT-Advanced were first standardized in 2012. With the widespread use of the Internet, mobile hosts have overtaken fixed ones, not only in terms of numbers but also in terms of traffic load [1]. How to practically deal with the explosive growth in wireless traffic and meet the increasing needs of mobile users has become a growing concern in current cellular networks because of the increasing network cost [2, 3]. Recently, 5G networks have been designed by fully considering ideas related to the future network, for example, Software Defined Networking (SDN) [4], Content-Centric Networking (CCN) [5, 6], and Big Data, in order to provide not simply faster speeds but also to meet the needs of new use cases, such as the Internet of Things as well as broadcast-like services and lifeline communication in times of natural disaster.

SDN is an emerging network architecture in which network control is decoupled from forwarding. Network control, formerly tightly bound to individual network devices, is migrated to a logically centralized controller, which enables the underlying infrastructure to be abstracted for applications and network services that can treat the network as a logical or virtual entity. The main idea of SDN is to allow software developers to rely on network resources as easily as they do on storage and computing resources. In SDN, the network intelligence is logically centralized in software-based controllers, and network devices become simple packet forwarding devices that can be programmed via an open interface [4]. Based on SDN, the system can provide a global view of the network and control the forwarding of traffic according to mobile users' demands.
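As a toy illustration of this decoupling (our example, not part of the original paper), the sketch below shows a controller that holds the global view and installs match-action rules into switches that only look up and forward; all class, rule, and address names are assumptions made for illustration.

```python
# Toy sketch of SDN-style control/data plane separation (our illustration,
# not from the paper): the controller computes rules from its global view and
# pushes them into simple match-action tables on the switches.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}              # match (destination) -> action (out port)

    def install_rule(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, dst):
        # A real switch would punt unmatched packets to the controller.
        return self.flow_table.get(dst, "send-to-controller")


class Controller:
    def __init__(self, global_view):
        self.global_view = global_view    # switch name -> {destination: out port}

    def push_rules(self, switches):
        for sw in switches:
            for dst, port in self.global_view.get(sw.name, {}).items():
                sw.install_rule(dst, port)


if __name__ == "__main__":
    s1 = Switch("s1")
    ctrl = Controller({"s1": {"10.0.0.2": 2, "10.0.0.3": 3}})
    ctrl.push_rules([s1])
    print(s1.forward("10.0.0.2"))         # -> 2
    print(s1.forward("10.0.0.9"))         # -> send-to-controller
```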

CCN is a receiver-driven, data-centric communication protocol [7, 8]. All communication in CCN is performed using two distinct types of packets: Interest packets and Data packets. Both types of packets carry a name, which uniquely identifies a piece of data that can be carried in one Data packet. Besides, to receive the data the users requested, each CCN content router maintains three major data structures: a Content Store (CS) for temporary caching of received Data packets, a Pending Interest Table (PIT) that records the name of each Interest packet together with the set of interfaces from which matching Interests have been received, and a Forwarding Information Base (FIB) used to forward Interests. CCN has recently emerged as one of the most promising architectures for the diffusion of content over the Internet. A major feature of this novel networking paradigm is in-network caching [9, 10], which caches content objects to shorten the distance travelled by user requests. When content is sent in reply to a user request, it can be cached by any CCN content router along the way back to the request originator. With in-network caching, CCN can provide low dissemination latency and reduce network load, as requests no longer need to travel all the way to the content source but are typically served by a closer CCN content router along the routing path [11].
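The three data structures can be pictured with a minimal sketch (our simplification, not an implementation from the paper): the CS as a name-to-data map, the PIT as a name-to-incoming-faces map, and the FIB as a prefix-to-outgoing-face map.

```python
# Minimal sketch of a CCN content router's state (our simplification of the
# CS/PIT/FIB described above; not the paper's implementation).

class CCNRouter:
    def __init__(self):
        self.cs = {}    # Content Store: content name -> cached Data packet
        self.pit = {}   # Pending Interest Table: name -> set of incoming faces
        self.fib = {}   # Forwarding Information Base: name prefix -> next-hop face

    def on_interest(self, name, in_face):
        if name in self.cs:                       # cache hit: reply directly
            return ("data", self.cs[name])
        self.pit.setdefault(name, set()).add(in_face)
        prefix = name.rsplit("/", 1)[0]           # simplified longest-prefix match
        out_face = self.fib.get(prefix)
        return ("forward", out_face)

    def on_data(self, name, data):
        self.cs[name] = data                      # in-network caching on the way back
        faces = self.pit.pop(name, set())         # satisfy all pending requesters
        return ("deliver", faces, data)
```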
To cope with the explosive increase of mobile data and make timely responses to users' requests and network problems, more and more researchers have paid attention to Big Data. Big Data is a set of techniques and technologies that require new forms of integration to uncover large hidden values from datasets that are diverse, complex, and of massive scale; it makes possible centralized network control as well as timely processing and analysis of massive traffic in wireless networks.

In order to resolve the problems traditional cellular radio networks face, some advantages related to the future network have been considered. In this paper, we propose a service-customized 5G network architecture by introducing the ideas of separation between control plane and data plane, in-network caching, and Big Data processing and analysis, and we design an optimal routing algorithm for this architecture. The main contributions of this paper are as follows.

(i) We propose a novel service-customized 5G network architecture, which fully considers the benefits brought by the separation between control plane and data plane, in-network caching, and Big Data processing and analysis.

(ii) We design an optimal routing algorithm and abstract it as an optimization model in the proposed 5G network architecture, which can meet a mobile user's request with minimal network latency and realize load balance in the network.

(iii) Simulation results reveal that, by introducing the cache, network performance can be obviously improved under different network conditions compared to the scenario without a cache. In addition, we explore the change of cache hit rate and average response hops under different cache replacement policies, cache sizes, content popularities, and network topologies, respectively. In the simulation, we use the three most popular cache replacement policies, Least Frequently Used (LFU), Least Recently Used (LRU), and Random (RND) [12, 13], to evaluate the performance of the system.

The rest of this paper is organized as follows. In Section 2, the novel service-customized 5G network architecture is presented. In Section 3, the optimal routing model in the proposed architecture is given. Simulation results are presented and discussed in Section 4. Finally, we conclude this study in Section 5.

2. Service-Customized 5G Network Architecture

As shown in Figure 1, the service-customized 5G network architecture introduces an in-network cache in the network devices, such as base stations and routers, realizes the separation between control plane and data plane, and adds Big Data processing and analysis functions to the control plane.

From Figure 1, we can see that the cache introduced in the network devices can buffer the contents that mobile users are interested in and place those contents near the users. A following request for the same content can then be satisfied by the cache without being transmitted to the source server. Moreover, the separation between control plane and data plane gives the system a global view of resources (e.g., network, compute, and cache) and lets it dynamically configure the underlying network equipment in time by using the online and off-line Big Data processing and analysis platform.

The workflow of the proposed architecture is as follows. Initially, the controller keeps monitoring the network and updates its information in a fixed period; it can therefore obtain the number of users' requests and the real-time load situation. When a mobile consumer requests a content, the request carrying the content's name is encapsulated and forwarded to the network edge device. Then the controller uses the collected information to calculate the optimal routing path with the minimum network cost among the providers which cache the requested content. After that, it updates the in-network cache status based on the number of users' requests and the cache replacement policies. Thanks to the Big Data platform, the controller can obtain the optimal routing path and update the cache status in a timely manner; a sketch of this loop is given at the end of this section.

To satisfy the QoS of a VoD application, the application first tells the controller what kinds of resources, and how many (e.g., network bandwidth, storage capacity), it needs in a request packet, which may be constructed in some way. After receiving the request packet, the controller uses the Big Data processing platform to analyze the resource information contained in the packet and then automatically allocates resources according to the demands of the application. Finally, a virtual network is formed, and the proposed optimal routing algorithm model is used to achieve minimal network latency under the constrained conditions by monitoring the dynamic storage and content status.
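The request-handling workflow described above can be made concrete with a hedged sketch; the function names, the shortest-path stand-in for the optimal routing step, and the LRU-style cache update are our assumptions, since the paper gives no pseudocode.

```python
# Sketch of the controller workflow described above: find the nearest provider
# that caches the requested content, route to it, and refresh caches on the
# reverse path. Names and the shortest-path stand-in are assumptions.
import networkx as nx

CACHE_SLOTS = 5                                   # assumed per-node cache capacity

def handle_request(graph, caches, source_server, consumer_edge, content):
    """Return the routing path chosen by the controller for one content request."""
    providers = [n for n, cache in caches.items() if content in cache]
    providers.append(source_server)               # the origin server always has it
    best = min(providers,
               key=lambda p: nx.shortest_path_length(graph, consumer_edge, p))
    path = nx.shortest_path(graph, consumer_edge, best)
    for node in path[1:]:                         # update caches along the path
        cache = caches.setdefault(node, [])
        if content in cache:
            cache.remove(content)
        cache.insert(0, content)                  # most recent at the front
        del cache[CACHE_SLOTS:]                   # evict beyond capacity (LRU-style)
    return path

if __name__ == "__main__":
    g = nx.path_graph(5)                          # toy topology: 0-1-2-3-4
    caches = {2: ["video/42"]}                    # router 2 already caches the item
    print(handle_request(g, caches, source_server=4, consumer_edge=0,
                         content="video/42"))     # served from router 2, not node 4
```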

[Figure: a controller sits above Routers 1–5, which interconnect several access networks.]
Figure 1: Service-customized 5G network architecture.

3. Optimal Routing Algorithm Model

We model the network as a connected graph $G = (V, E)$, where $V = (V_1, V_2, \ldots, V_N)$ is the set of content routers in the network and $E \subseteq V \times V$ is the set of bidirectional network links. Let $O = (O_1, O_2, \ldots, O_M)$ be the set of content objects available in the network. All of the objects are initially distributed in the network servers, which are directly connected to edge content routers [14]. For the sake of readability, the terms "content router" and "node" are used interchangeably here.

In this paper, our objective is to achieve minimal network latency by addressing the question of how each content router with limited caching capacity caches contents in the network. Therefore, the optimal routing problem can be formulated as an integer linear program (ILP) as follows:

$$
\begin{aligned}
\min_{x,y} \quad & \sum_{i=1}^{N} \sum_{k=1}^{M} q_i^k \sum_{j=1}^{N} d_{ij}^k\, y_{ij}^k \\
\text{s.t.} \quad & \sum_{j=1}^{N} y_{ij}^k = 1, \quad \forall i, k \\
& y_{ij}^k \le x_j^k, \quad \forall i, j, k \\
& \sum_{k=1}^{M} s^k x_i^k \le C_i, \quad \forall i \\
& y_{ij}^k \in \{0, 1\}, \; x_i^k \in \{0, 1\}, \quad \forall i, j, k,
\end{aligned}
\tag{1}
$$

where $q_i^k$ is the request rate for object $O_k$ at node $i$, $d_{ij}^k$ is the distance incurred by node $i$ to request content object $O_k$ from node $j$, $C_i$ is the maximum cache size at router $i$, and $s^k$ is the size of content object $O_k$. Moreover, $x_i^k$ takes the value 1 if node $i$ caches a copy of object $k$, and 0 otherwise; $y_{ij}^k$ takes the value 1 if node $i$ downloads a copy of content object $k$ from node $j$, and 0 otherwise. Standard methods (e.g., a genetic algorithm) can be used to obtain the optimal solution.
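Formulation (1) also maps directly onto an off-the-shelf ILP solver. The sketch below uses the PuLP modelling library as one possible (assumed) tooling choice, with small synthetic inputs in place of the paper's simulation data; the paper itself only notes that heuristics such as genetic algorithms can solve the model.

```python
# Sketch of formulation (1) with the PuLP modelling library (an assumed tooling
# choice, not the paper's method). Inputs q, d, s, C are synthetic placeholders.
import pulp

N, M = 4, 3                                       # nodes, content objects
q = {(i, k): 1.0 for i in range(N) for k in range(M)}            # request rates
d = {(i, j, k): abs(i - j) for i in range(N) for j in range(N)
     for k in range(M)}                                          # hop distances
s = {k: 1 for k in range(M)}                                     # object sizes
C = {i: 2 for i in range(N)}                                     # cache capacities

prob = pulp.LpProblem("optimal_routing", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", [(i, k) for i in range(N) for k in range(M)],
                          cat="Binary")
y = pulp.LpVariable.dicts("y", [(i, j, k) for i in range(N) for j in range(N)
                                for k in range(M)], cat="Binary")

# Objective: request rate times distance of the chosen provider, summed.
prob += pulp.lpSum(q[i, k] * d[i, j, k] * y[i, j, k]
                   for i in range(N) for j in range(N) for k in range(M))
# Each (node, object) request is served by exactly one provider.
for i in range(N):
    for k in range(M):
        prob += pulp.lpSum(y[i, j, k] for j in range(N)) == 1
# A provider can only serve objects it actually caches.
for i in range(N):
    for j in range(N):
        for k in range(M):
            prob += y[i, j, k] <= x[j, k]
# Cached objects must fit within each node's cache capacity.
for i in range(N):
    prob += pulp.lpSum(s[k] * x[i, k] for k in range(M)) <= C[i]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("optimal cost:", pulp.value(prob.objective))
```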

[Figure: cache hit rate (%) plotted against relative cache size (%) for LFU, LRU, and RND.]
Figure 2: Cache hit rate versus cache size under different network topologies. (a) Power-law topology. (b) Transit-stub topology.

4. Simulation Results and Discussions

In this section, we use computer simulations to evaluate the performance of the proposed architecture. We first describe the simulation settings and then present and compare the simulation results.

4.1. Simulation Settings

4.1.1. Network Topologies. The simulation is carried out on a power-law topology and a transit-stub topology, respectively. The power-law topology is generated by the Inet topology generator [15] and includes 64 content routers, of which 40 edge routers are connected to the users, making the number of users T = 40. The transit-stub topology is generated by the GT-ITM library [16] and has 24 routers, of which 15 edge routers are connected to the users, making the number of users T = 15.

4.1.2. Input Data. In the simulation, there are 100 different contents, and the total number of requested content objects is 2 × 10⁶ in the network. We assume each object has the same size; the content popularity follows the Zipf distribution, and the skewness factor α ranges over 0.5–1.5 [11, 17, 18].

4.1.3. Cache Size. In the simulation, we abstract the cache size of each CCN content router as a proportion; that is, the cache size is defined relative to the total amount of different contents in the network. Given that the cache size of a CCN router is small in realistic networks [13, 19–23], we evaluate the network performance of each caching scheme when the cache memory size varies from 1% to 10% [23].
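A hedged sketch of this simulation workload is shown below. Since the Inet and GT-ITM outputs are not reproduced here, a Barabási–Albert graph from networkx stands in for the power-law topology and an explicit truncated Zipf law models content popularity; both stand-ins are our assumptions.

```python
# Sketch of the simulation workload in Sections 4.1.1-4.1.3 (assumed stand-ins:
# a Barabasi-Albert graph for the Inet power-law topology, a truncated Zipf
# distribution for content popularity).
import networkx as nx
import numpy as np

NUM_ROUTERS = 64            # power-law topology size used in the paper
NUM_EDGE = 40               # edge routers attached to users (T = 40)
NUM_CONTENTS = 100          # distinct contents in the catalogue
NUM_REQUESTS = 2 * 10**6    # total requested content objects
ALPHA = 0.8                 # Zipf skewness factor (the paper sweeps 0.5-1.5)

rng = np.random.default_rng(0)
topology = nx.barabasi_albert_graph(NUM_ROUTERS, m=2, seed=0)   # power-law degrees
edge_routers = sorted(topology.nodes())[:NUM_EDGE]

# Truncated Zipf popularity: probability of rank r is proportional to r^(-alpha).
ranks = np.arange(1, NUM_CONTENTS + 1)
pmf = ranks ** (-ALPHA)
pmf /= pmf.sum()

consumers = rng.choice(edge_routers, size=NUM_REQUESTS)
contents = rng.choice(NUM_CONTENTS, size=NUM_REQUESTS, p=pmf)
requests = list(zip(consumers.tolist(), contents.tolist()))     # (edge router, content id)
```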
4.1.4. Comparative Policies. The widely used cache replacement policies in CCN are Least Frequently Used (LFU), Least Recently Used (LRU), and Random (RND). Therefore, we use these three popular cache replacement policies to evaluate the performance of the proposed architecture and of the designed routing policy in the simulation.

4.1.5. Performance Metrics. In the simulation, we evaluate the cache hit rate (CHR) and the average response hops (ARH), which are two important metrics of network QoS.

(i) CHR is the ratio of the number of requested objects served by routers, rather than by the source servers, to the total number of requested contents.

(ii) ARH is the average number of routers traversed by the response packets from the source servers or routers to the requesting mobile users.
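A minimal sketch of the three replacement policies and the two metrics is given below; it is our illustrative implementation, since the paper does not publish its simulator, and the `route` and `server_hops` callbacks are assumed hooks into a topology model such as the one sketched above.

```python
# Minimal sketch of the LFU / LRU / RND replacement policies and of the CHR and
# ARH metrics defined above (illustrative only; not the paper's simulator).
import random
from collections import Counter, OrderedDict

class Cache:
    def __init__(self, capacity, policy):
        self.capacity, self.policy = capacity, policy
        self.items = OrderedDict()            # keeps insertion/recency order
        self.freq = Counter()                 # access counts for LFU

    def lookup(self, name):
        hit = name in self.items
        if hit:
            self.freq[name] += 1
            self.items.move_to_end(name)      # mark as most recently used
        return hit

    def insert(self, name):
        if name in self.items:
            return
        if len(self.items) >= self.capacity:  # pick one victim to evict
            if self.policy == "LRU":
                victim, _ = self.items.popitem(last=False)
            elif self.policy == "LFU":
                victim = min(self.items, key=lambda n: self.freq[n])
                del self.items[victim]
            else:                             # RND
                victim = random.choice(list(self.items))
                del self.items[victim]
            self.freq.pop(victim, None)
        self.items[name] = True
        self.freq[name] += 1

def run(requests, route, caches, server_hops):
    """Replay requests and return cache hit rate (CHR) and average response hops (ARH)."""
    hits, hops = 0, 0
    for consumer, content in requests:
        path = route(consumer, content)       # routers from consumer toward the server
        served_at = None
        for depth, router in enumerate(path, start=1):
            if caches[router].lookup(content):
                served_at = depth
                break
        if served_at is None:
            hops += server_hops(consumer)     # request reached the origin server
            served_at = len(path)
        else:
            hits += 1
            hops += served_at
        for router in path[:served_at]:       # cache on the reverse path
            caches[router].insert(content)
    return hits / len(requests), hops / len(requests)
```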
4.2. Performance Evaluation Results. Figure 2 shows the cache hit rate of each solution with varying cache sizes under different network topologies when the content popularity is 0.8. From Figure 2, we can see that the cache hit rate of each policy changes in the same way as the cache size increases in the power-law and transit-stub topologies, respectively. However, the cache hit rate under the power-law topology is better than that under the transit-stub topology because the total cache size is much larger. Moreover, the performance of RND is best while that of LRU is worst. The reason is that the RND policy fits the behavior of the global network users well by randomly replacing the contents in the cache, whereas under LRU the cached content is replaced by the most recently accessed one, which increases the contention for each cache and further reduces the cache hit rate of the system. LFU can achieve a high cache hit rate by caching the frequently requested contents, but its performance is worse than RND because it catches up slowly with the change of content popularity at each node.

Figure 3 shows the average response hops of each solution with varying cache sizes under different network topologies when the content popularity is 0.8. From Figure 3, we can see that the proposed architecture obviously reduces the average response hops by introducing the in-network cache. Moreover, the average response hops of each policy change in the same way as the cache size increases in the power-law and transit-stub topologies, respectively. However, the average response hops under the power-law topology are worse than those under the transit-stub topology because of the larger distances between the network nodes. Moreover, the performance of LFU is best while that of LRU is worst. The reason is that each node using the LFU policy caches the contents with high access frequency, which makes the contents that users are interested in available near them, whereas LRU leads to frequent content replacement at each node, which worsens the average response hops of the system. Although RND can achieve the best cache hit rate, it cannot keep the contents of interest cached near the users, so it obtains worse average response hops than the LFU policy.

Figure 4 shows the cache hit rate of each solution with varying content popularity under different network topologies when the cache size is 5%. From Figure 4, we can see that the cache hit rate of each policy changes in the same way as the content popularity increases in the power-law and transit-stub topologies, respectively.

[Figure: average response hops plotted against relative cache size (%) for no cache, LFU, LRU, and RND.]
Figure 3: Average response hops versus cache size under different network topologies. (a) Power-law topology. (b) Transit-stub topology.

[Figure: cache hit rate (%) plotted against content popularity for LFU, LRU, and RND.]
Figure 4: Cache hit rate versus content popularity under different network topologies. (a) Power-law topology. (b) Transit-stub topology.

The reason is that the increasing content popularity gradually reduces the number of content types requested in the network. However, the cache hit rate under the power-law topology is better than that under the transit-stub topology because the total cache size is much larger. Moreover, the performance of RND is best while that of LRU is worst. The reason is similar to that of Figure 2: the reduced number of content types effectively means a relatively larger cache size.

Figure 5 shows the average response hops of each solution with varying content popularity under different network topologies when the cache size is 5%. From Figure 5, we can see that the proposed architecture obviously reduces the average response hops by introducing the in-network cache. Moreover, the average response hops of each policy change in the same way as the content popularity increases in the power-law and transit-stub topologies, respectively. However, the average response hops under the power-law topology are worse than those under the transit-stub topology because of the larger distances between the network nodes. Moreover, the performance of LFU is best while that of LRU is worst. The reason is similar to that of Figure 3, discussed in the previous paragraph.

[Figure: average response hops plotted against content popularity for no cache, LFU, LRU, and RND.]
Figure 5: Average response hops versus content popularity under different network topologies. (a) Power-law topology. (b) Transit-stub topology.

5. Conclusions and Future Work

In this paper, by adopting the advantages of the separation between control plane and data plane, in-network caching, and Big Data processing and analysis, we propose a service-customized 5G network architecture to overcome the problems current cellular radio networks face. Moreover, we design an optimal routing algorithm for this architecture, which can minimize the average response hops in the network. Simulation results reveal that, by introducing the cache, network performance can be obviously improved under different network conditions compared to the scenario without a cache. In addition, we explore the change of cache hit rate and average response hops under different cache replacement policies, cache sizes, content popularities, and network topologies, respectively.

With recent advances in wireless mobile communication technologies and devices, more and more end users access the Internet via mobile devices, such as smartphones and tablets. Therefore, we will study user mobility in the proposed model in the future. Moreover, it would be interesting to discuss how to find an online routing algorithm to minimize the network cost in future work.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported by NSFC (61471056) and the China Jiangsu Future Internet Research Fund (BY2013095-3-1, BY2013095-3-03).

References

[1] G. Xylomenos, X. Vasilakos, C. Tsilopoulos, V. A. Siris, and G. C. Polyzos, "Caching and mobility support in a publish-subscribe internet architecture," IEEE Communications Magazine, vol. 50, no. 7, pp. 52–58, 2012.
[2] J. Araujo, F. Giroire, Y. Liu, R. Modrzejewski, and J. Moulierac, "Energy efficient content distribution," in Proceedings of the IEEE International Conference on Communications (ICC '13), pp. 4233–4238, Budapest, Hungary, June 2013.
[3] V. Mathew, R. K. Sitaraman, and P. Shenoy, "Energy-aware load balancing in content delivery networks," in Proceedings of the IEEE Conference on Computer Communications (INFOCOM '12), pp. 954–962, IEEE, Orlando, Fla, USA, March 2012.
[4] B. A. A. Nunes, M. Mendonca, X.-N. Nguyen, K. Obraczka, and T. Turletti, "A survey of software-defined networking: past, present, and future of programmable networks," IEEE Communications Surveys & Tutorials, vol. 16, no. 3, pp. 1617–1634, 2014.
[5] C. Fang, F. R. Yu, T. Huang, J. Liu, and Y. Liu, "A survey of green information-centric networking: research issues and challenges," IEEE Communications Surveys & Tutorials, vol. 17, no. 3, pp. 1455–1472, 2015.
[6] C. Fang, F. R. Yu, T. Huang, J. Liu, and Y. Liu, "A survey of energy-efficient caching in information-centric networking," IEEE Communications Magazine, vol. 52, no. 11, pp. 122–129, 2014.
[7] C. Yi, A. Afanasyev, I. Moiseenko, L. Wang, B. Zhang, and L. Zhang, "A case for stateful forwarding plane," Tech. Rep., University of Arizona, University of California, Los Angeles, University of Memphis, 2012.
[8] C. Fang, F. R. Yu, T. Huang, J. Liu, and Y. Liu, "A distributed energy consumption optimization algorithm for content-centric networks via dual decomposition," in Proceedings of the IEEE Global Communications Conference (GLOBECOM '14), pp. 1848–1853, Austin, Tex, USA, December 2014.

[9] I. Psaras, R. G. Clegg, R. Landa, W. K. Chai, and G. Pavlou, "Modelling and evaluation of CCN-caching trees," in NETWORKING 2011: 10th International IFIP TC 6 Networking Conference, Valencia, Spain, May 9–13, 2011, Proceedings, Part I, vol. 6640 of Lecture Notes in Computer Science, pp. 78–91, Springer, Berlin, Germany, 2011.
[10] G. Carofiglio, V. Gehlen, and D. Perino, "Experimental evaluation of memory management in content-centric networking," in Proceedings of the IEEE International Conference on Communications (ICC '11), Kyoto, Japan, June 2011.
[11] N. Choi, K. Guan, D. C. Kilper, and G. Atkinson, "In-network caching effect on optimal energy consumption in content-centric networking," in Proceedings of the IEEE International Conference on Communications (ICC '12), pp. 2889–2894, Ottawa, Canada, June 2012.
[12] V. Jacobson, D. K. Smetters, J. D. Thornton, M. F. Plass, N. H. Briggs, and R. L. Braynard, "Networking named content," in Proceedings of the ACM Conference on Emerging Networking Experiments and Technologies (CoNEXT '09), pp. 1–12, Rome, Italy, December 2009.
[13] D. Rossi and G. Rossini, "Caching performance of content centric networks under multi-path routing (and more)," Tech. Rep., Telecom ParisTech, Paris, France, 2011.
[14] C. Fang, F. R. Yu, T. Huang, J. Liu, and Y. Liu, "An energy-efficient distributed in-network caching scheme for green content-centric networks," Computer Networks, vol. 78, pp. 119–129, 2015.
[15] H. Chang, R. Govindan, S. Jamin, S. J. Shenker, and W. Willinger, "Towards capturing representative AS-level internet topologies," Computer Networks, vol. 44, no. 6, pp. 737–755, 2004.
[16] E. W. Zegura, K. L. Calvert, and M. J. Donahoo, "A quantitative comparison of graph-based models for internet topology," IEEE/ACM Transactions on Networking, vol. 5, no. 6, pp. 770–783, 1997.
[17] L. Breslau, P. Cao, L. Fan, G. Phillips, and S. Shenker, "Web caching and Zipf-like distributions: evidence and implications," in Proceedings of the 18th IEEE Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM '99), vol. 1, pp. 126–134, IEEE, New York, NY, USA, March 1999.
[18] K. Katsaros, G. Xylomenos, and G. C. Polyzos, "MultiCache: an incrementally deployable overlay architecture for information-centric networking," in Proceedings of the IEEE Conference on Computer Communications Workshops (INFOCOM '10), pp. 1–5, IEEE, San Diego, Calif, USA, March 2010.
[19] Z. Li, G. Simon, and A. Gravey, "Caching policies for in-network caching," in Proceedings of the 20th International Conference on Computer Communications and Networks (ICCCN '11), Maui, Hawaii, USA, July-August 2011.
[20] H. Xie, G. Shi, and P. Wang, "TECC: towards collaborative in-network caching guided by traffic engineering," in Proceedings of the IEEE Conference on Computer Communications (INFOCOM '12), pp. 2546–2550, Orlando, Fla, USA, March 2012.
[21] J. Li, H. Wu, B. Liu et al., "Popularity-driven coordinated caching in named data networking," in Proceedings of the 8th ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS '12), pp. 15–26, Marina del Rey, Calif, USA, October 2012.
[22] J. Li, B. Liu, and H. Wu, "Energy-efficient in-network caching for content-centric networking," IEEE Communications Letters, vol. 17, no. 4, pp. 797–800, 2013.
[23] N. Laoutaris, S. Syntila, and I. Stavrakakis, "Meta algorithms for hierarchical web caches," in Proceedings of the 23rd IEEE International Performance, Computing, and Communications Conference (IPCCC '04), pp. 445–452, IEEE, Phoenix, Ariz, USA, April 2004.
