Task Offloading Optimization Mechanism Based On Deep Neural Network in Edge-Cloud Environment
Abstract
With the rise of edge computing technology and the development of intelligent mobile devices, task offloading in the edge-cloud environment has become a research hotspot. Task offloading is also a key research issue in Mobile CrowdSourcing (MCS), where crowd workers collect sensed data through the smart devices they carry and either offload it to edge-cloud servers or perform computing tasks locally. Current research mainly focuses on reducing resource consumption in edge-cloud servers but fails to consider the conflict between resource consumption and service quality. Therefore, this paper considers learning to generate offloading strategies among multiple Deep Neural Networks (DNNs) and proposes a Deep Neural Network-based Task Offloading Optimization (DTOO) algorithm to obtain an approximately optimal task offloading strategy on edge-cloud servers, resolving the conflict between resource consumption and service quality. In addition, a stack-based offloading strategy is studied: a resource sorting method allocates computing resources reasonably, thereby reducing the probability of task failure. Compared with existing algorithms, the DTOO algorithm balances the conflict between resource consumption and service quality in traditional edge-cloud applications while ensuring a higher task completion rate.
Keywords Edge-cloud, Task offloading, Mobile crowdsourcing, Deep neural network, Resource consumption, Service
quality
© The Author(s) 2023. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which
permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the
original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or
other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line
to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory
regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this
licence, visit http://creativecommons.org/licenses/by/4.0/.
Meng et al. Journal of Cloud Computing (2023) 12:76 Page 2 of 12
In a typical mobile crowdsourcing system, a complete cloud-based architecture consists of the cloud platform, task requesters, and crowd workers. First, the task requester issues the task through the cloud platform. Then, appropriate workers are selected and tasks are assigned through task assignment [12]. Crowd workers collect data through their mobile devices and upload it to the cloud platform. Finally, the platform evaluates and updates the worker's reputation value [13–15] based on the quality of the worker's uploaded data [16]. With the emergence of technologies such as intelligent driving [17], task requesters have increasingly high requirements for real-time data [18, 19]; workers uploading data to the cloud platform incur large data delays [20], and traditional centralized cloud platforms cannot meet this requirement. The emergence of edge-cloud technology has largely solved this problem. Edge-cloud refers to the use of an edge-cloud server that integrates network, computing, storage, and application core capabilities on the side close to the mobile device or data source [21] to provide the nearest computing service. When workers use mobile devices to upload data, they can directly interact with the nearest edge node, which greatly reduces data transmission latency [22]. In the edge-cloud environment, computing tasks are performed on a powerful edge-cloud server, which has the advantages of easy installation and small size [23], but its load capacity and computing power are still far inferior to those of cloud servers. Chen et al. [24] proposed a game-theory-based task offloading algorithm, but this algorithm requires multiple interactions between crowd workers and edge servers, which consumes a lot of resources. Huang et al. [25] proposed a task offloading and resource allocation scheme based on a deep Q-network, but its table-based search is not suitable for processing high-dimensional data. Therefore, the problem of optimizing the task offloading strategy in edge servers urgently needs to be solved [26]. The main challenges for task offloading in MCS are as follows.

1. In the practical application of MCS, workers often choose the nearest edge-cloud server to upload sensory data. If there are a large number of workers near an edge-cloud server and most of them choose to offload tasks to it, the server may be overloaded by excessive data processing demand and become paralyzed. Therefore, how to make reasonable task offloading decisions is an important research problem for preventing excessive load on edge-cloud servers.
2. Although researchers have proposed many schemes to solve the task distribution problem among multiple edge-cloud servers, the computing power of edge-cloud servers is limited. If there are too many tasks in the task queue, some time-sensitive tasks may not be handled in time. Therefore, how to reasonably allocate the computing resources on the edge-cloud server is a key factor in improving the success rate of task allocation.

In response to the above challenges, this paper studies a task offloading optimization algorithm, DTOO, for MCS, which generates a near-optimal task offloading strategy and resolves the conflict between resource consumption and quality of service. A resource allocation scheme is designed to improve the success rate of tasks. The main contributions of this paper are summarized as follows.

1. This paper designs a task offloading algorithm, DTOO, based on DNNs, which can obtain an approximately optimal offloading strategy through learning among multiple neural units, so as to resolve the conflict between edge-cloud server resource consumption and service quality.
2. This paper proposes a stack-based resource sorting scheme, which matches computing resources to tasks according to their timeliness level, thereby improving the success rate of tasks.
3. The proposed DNN-based task offloading scheme and stack-based task ranking mechanism are analyzed and evaluated through comparison experiments on real datasets. The experimental results verify the superiority of this scheme.

The rest of this paper is organized as follows. Section II introduces the related work. Section III describes the DTOO algorithm and the resource allocation scheme. Section IV presents the comparison experiments and the discussion of the experimental results. Finally, Section V presents the conclusion.

Related work
In recent years, more and more attention has been paid to the research of task assignment [27, 28] based on mobile crowdsourcing in the edge-cloud environment, aiming to design an optimal task offloading strategy with low latency, low energy consumption, and high service quality. Many scholars have conducted in-depth research on this topic and proposed feasible approaches.

Edge computing
With the popularization of the IoT and the promotion of cloud services [29], edge computing has emerged as a new computing paradigm. Edge computing refers to delegating data processing to the edge of the network [30]. This mode can reduce request delay and network bandwidth while ensuring data security and privacy [31–33]. The core of edge computing is to migrate some or all of the computing tasks in cloud computing to the vicinity of mobile devices. This highly promising approach can overcome some of the shortcomings of cloud computing [34]. Zhao et al. [35] designed a new mobile device data transmission scheme that introduces edge computing into the cloud-platform-centric architecture and uses edge nodes to assist data transmission, solving the problem of excessive bandwidth consumption in traditional cloud platform solutions [36]. This scheme explores the bandwidth consumption of edge nodes through the edge computing paradigm. Ren et al. [37] explored the problem of joint communication and computing resource allocation, aiming to find an optimal solution that minimizes latency in cloud and edge-cloud collaborative systems. An offloading scheme based on distributed computing was also designed, which achieves excellent computation offloading ability and adapts to changes in user scale [38], optimizing the problem of multi-user resource offloading of the edge cloud in a multi-channel wireless interference environment.

Task offloading based on edge cloud
The problem of offloading computing tasks in edge computing is a research hotspot [39]. In the actual crowdsourcing environment, task offloading is affected by various external factors, such as the hardware performance of the device, the network environment where the worker is located [40, 41], and the worker's personalized choice [42]. This makes it particularly important to formulate a reasonable and dynamically changing task offloading strategy according to the external environment. Some existing works mainly study how to make task assignment decisions in an offline or online state, and most of the research takes minimizing task completion time and resource consumption as the optimization goal. For example, Dinh et al. [43] considered the two cases of whether the CPU frequency of the edge server can be adjusted or not, and proposed a linear-relaxation-based method and an exhaustive-search-based scheme to solve them, respectively. Obviously, the exhaustive approach consumes a lot of computing resources. To strike a balance between resource consumption and computational latency, Wu et al. [44] proposed a task offloading algorithm based on Lyapunov optimization, which reduces the resource consumption of the device while satisfying the delay constraint. Considering service heterogeneity, unknown system dynamics, spatial demand coupling, and decentralized coordination, Xu et al. [45] proposed an online task offloading algorithm based on Lyapunov optimization and Gibbs sampling. Shu et al. [46] designed an algorithm that supports multi-user task offloading, dividing tasks into subtasks and offloading them to edge servers to reduce the end-to-end task execution time. Mao et al. [47] studied an Energy Harvesting (EH) technology to power mobile devices through renewable energy; based on Lyapunov optimization, the frequency and transmit power of the CPU are optimized to reduce the execution delay of the task. Zhao et al. [48] jointly optimized the offloading decision, radio resource allocation, and computing resource allocation, transformed the resource minimization problem into a Mixed Integer Nonlinear Programming (MINLP) problem, and proposed a Gini-Coefficient-based Greedy Heuristic (GCGH) to solve it. Although computation offloading is the core technology in edge computing, how to allocate resources to improve the task completion rate should also be considered in practical crowdsourcing applications.

Deep learning
As an emerging technology in machine learning, deep learning's main purpose is to build and simulate a neural network that analyzes and learns like the human brain [49]. The essence of deep learning is to perform hierarchical feature representation on data, further abstracting low-level features into high-level features through neural networks. DNNs composed of multi-layer perceptrons have achieved major breakthroughs in the fields of image classification and recognition, natural language processing, and robot control. Mnih et al. [50] used deep neural networks to develop a novel surrogate model called a deep Q-network [51], which bridges the gap between high-level sensory input and decision-making actions. Deep learning is also widely used in the field of wireless communication, for example in resource allocation problems [52], signal detection problems [53], and data caching problems [54]. In recent years, some scholars have used deep learning models to solve the task offloading problem in the edge-cloud environment. Huang et al. [55] proposed a deep reinforcement learning-based method for the joint task offloading and resource allocation problem, with the aim of giving each user a satisfactory task offloading decision and resource allocation scheme. However, the Q-table-based search nature of deep Q-learning makes its performance unremarkable when dealing with high-dimensional data.

Existing optimization schemes do not take into account the limitation of computational dimensions,
and general optimization algorithms cannot efficiently deal with the complexity of data in the actual crowdsourcing environment, especially when faced with a large number of crowd workers. Existing work has also failed to consider the allocation of resources when optimizing the task offloading scheme. Therefore, in view of the above problems, this paper considers the balance between resource consumption and quality of service, together with the task completion rate. A DTOO algorithm is proposed to produce an efficient offloading decision through mutual learning among DNNs. In order to improve the task completion rate and allocate resources according to the priority of tasks, a stack-based resource allocation scheme is designed.

System design
In this section, the task offloading problem in the edge-cloud environment is first defined; then the system model of this paper is introduced; finally, the proposed algorithm is described in detail. The main symbol definitions are shown in Table 1.

Table 1 Symbols and definitions
Notation                        Description
C = {c1, c2, c3, ..., cn}       Crowd task set
l = {lc1, lc2, lc3, ..., lcn}   Crowd task posted-location set
Tstart                          Crowd task release time
Tend                            Crowd task deadline
T = {tc1, tc2, tc3, ..., tcn}   Maximum allowable delay set for crowd tasks
D = {dc1, dc2, dc3, ..., dcn}   Amount of data contained in the crowd tasks
W = {w1, w2, w3, ..., wm}       Crowd worker set
Wid                             Crowd worker id
Sciwj                           Task offload policy
rlocal                          Local data processing rate
qlocal                          Energy consumption per bit of data processed locally
xedge                           Edge server data transfer rate
redge                           Edge server data processing rate
yedge                           Transmission energy consumption per bit of edge server data
qedge                           Energy consumption per bit of data processed by edge servers
Tlocal(ci, wj)                  Local time consumption of worker wj for task ci
Elocal(ci, wj)                  Local energy consumption of worker wj for task ci
Tedge(ci, wj)                   Edge time consumption of worker wj for task ci
Eedge(ci, wj)                   Edge energy consumption of worker wj for task ci
Wlocal                          Total consumption of locally processed computing tasks
Wedge                           Total consumption of edge-server-processed computing tasks

Problem definition
Definition 1 (Crowd Task): In an MCS system, crowd tasks are uploaded to the platform by the task requester and released by the platform, defined as C = {c1, c2, c3, ..., cn}. Each task also has its properties: the location where the task is published is defined as l = {lc1, lc2, lc3, ..., lcn}, the task release time is defined as Tstart, and the task deadline is defined as Tend. Therefore, the maximum allowable delay of a task can be defined as T = {tc1, tc2, tc3, ..., tcn}. The data volume of the task is defined as D = {dc1, dc2, dc3, ..., dcn}.

Definition 2 (Crowd Worker): Crowd workers collect data with their own mobile devices and upload the data to the crowd platform. Crowd workers are defined as W = {w1, w2, w3, ..., wm}. Each crowd worker also has its attributes; the id of the worker is defined as Wid.

Definition 3 (Task Offload Policy): Each crowd worker can choose to process computing tasks locally or offload them to edge servers for processing. Therefore, the task offloading strategy can be considered a binary problem: when a worker chooses to process a computing task locally, it is recorded as 0, and when the worker chooses to offload the task to an edge server, it is recorded as 1. The task offloading strategy is defined as Eq. (1):

    Sciwj = { 0, local; 1, offload }    (1)

where Sciwj = 0 indicates that the task is executed locally, and Sciwj = 1 indicates that the task is offloaded to the edge server for execution.

Definition 4 (Local Computation): This models the situation where users choose to process computing tasks locally. For computation tasks executed locally, the time consumption is defined as Eq. (2):

    Tlocal(ci, wj) = dci / rlocal    (2)

where Tlocal(ci, wj) is the time consumption of worker wj for task ci, and rlocal is the rate at which data is processed locally. The energy consumption of local processing is defined as Eq. (3):

    Elocal(ci, wj) = dci × qlocal    (3)
where qlocal is the energy consumption per bit of data processed locally. Therefore, the joint offloading strategy defines the total consumption of locally processed computing tasks as Eq. (4):

    Wlocal = Σ(i=1..n) Σ(j=1..m) [Tlocal(ci, wj) + Elocal(ci, wj)] × (1 − Sciwj)    (4)

Definition 5 (Edge Computing): This models the user's choice to offload computing tasks to edge servers. The set of edge servers is defined as ES = {es1, es2, es3, ..., esz}, and the time consumption of computing tasks offloaded to edge servers is defined as Eq. (5):

    Tedge(ci, wj) = dci / xedge + dci / redge    (5)

where Tedge(ci, wj) is the time consumption of worker wj for task ci, xedge is the data transmission rate of the edge server, and redge is the data processing rate of the edge server. The energy consumption of offloading computing tasks to edge servers is defined as Eq. (6):

    Eedge(ci, wj) = dci × yedge + dci × qedge    (6)

where Eedge(ci, wj) is the energy consumption of worker wj for task ci, yedge is the energy consumption of data transmission per bit, and qedge is the energy consumption of the edge server for processing each bit of data. Therefore, the joint offloading strategy defines the total consumption of edge servers processing computing tasks as Eq. (7):

    Wedge = Σ(i=1..n) Σ(j=1..m) [Tedge(ci, wj) + Eedge(ci, wj)] × Sciwj    (7)

Then the total consumption of the system is defined as Eq. (8):

    Wtotal = Σ(i=1..n) Σ(j=1..m) [Tlocal(ci, wj) + Elocal(ci, wj)] × (1 − Sciwj)
           + Σ(i=1..n) Σ(j=1..m) [Tedge(ci, wj) + Eedge(ci, wj)] × Sciwj    (8)

In short, to minimize the resource consumption of task completion, the goal of this stage is to find an optimal task offloading strategy that minimizes Wtotal.

Definition 6 (Resource Allocation): This models resource allocation among edge servers. Because the computing power and load capacity of edge servers are limited and crowd tasks are time-sensitive, edge servers need to complete computing tasks within a certain period. Therefore, according to the maximum allowable delay of the tasks, this paper sorts the tasks by priority and stores them in the task stack, which corresponds to the address stack in the edge server. When task ck joins, it is first determined whether the total delay Σ(u=1..k−1) tcu of its preceding tasks exceeds the maximum allowable delay of task ck; if it does, the task is allocated to another idle address.

System model design
In the edge-cloud environment, the computing tasks of the central cloud are sunk to the edge of the network, which greatly reduces network latency. At the same time, this paper uses mutual learning among multiple DNNs to obtain an approximately optimal offloading strategy, which ensures service quality under the premise of low resource consumption. A stack-based sorting mechanism that reasonably allocates resources is used to improve the task completion rate. In practical crowd applications, each crowd worker has multiple jobs that need to be processed locally or at the edge server, and the offloading decision is represented by 0 or 1: Sciwj = 0 means that the task is executed locally, and Sciwj = 1 indicates that the task is offloaded to the edge server for execution. The system model is shown in Fig. 1. First, a DNN is used to generate candidate offloading actions, taking the task scale carried by crowd workers as the input to the model. Then, an offloading strategy that meets the optimization objective is selected as the output. Computing tasks that are offloaded to edge servers for processing are sorted according to their priority. In the task stack, tasks with larger maximum allowable delays are placed at the bottom of the stack, and tasks with smaller delays are placed at the top. In the address stack, servers with more resources, that is, addresses with higher idle levels, are placed at the top of the stack, and those with fewer resources at the bottom. The task stack and the address stack are arranged in order so that the most urgent tasks are allocated the addresses with the most resources, which greatly improves the success rate of the tasks.

Since there may be crowd workers in the candidate set who are located at the boundary of an edge server's service range and have not been recruited by the platform, and these workers may have better utility, the movement trajectories of the workers are predicted before recruitment: based on the workers' historical trajectories and social networks, the partitions where the workers are located are accurately determined. Then, all the recruited workers are assigned to edge servers.
DTOO algorithms
In this section, the algorithms for generating offloading decisions using DNNs are presented. For the above-mentioned problem, we only need to find an optimal offloading strategy Scw such that Wtotal achieves its minimum value. Since the size of the set Scw is 2^(nm), finding an optimal offloading strategy is NP-hard. Therefore, an approximately optimal offloading policy function Φ, mapping task data D to a strategy S, is obtained by training the DNN, as in Eq. (9):

    arg min Φ(S) = arg min Wtotal(D, Sciwj),  i ∈ n, j ∈ m    (9)
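A rough sketch of this decision step, under our own simplifications (the paper's DNNs are replaced here by randomly weighted linear scorers, and all function names are ours): each "network" proposes a binary offloading action for one worker's task list, the candidates are scored with Wtotal from Eqs. (2)–(8) using the Table 2 parameters, and the argmin is kept.

```python
import random

# Parameter values taken from Table 2.
R_LOCAL, Q_LOCAL = 1.50e7, 6.60e8   # local processing rate (bit/s), energy (J/bit)
X_EDGE, R_EDGE = 1.25e8, 8.38e8     # edge transfer / processing rates (bit/s)
Y_EDGE, Q_EDGE = 7.81e9, 8.19e9     # edge transmission / processing energy (J/bit)

def w_total(data_bits, action):
    """Total consumption of Eq. (8) for one worker's task list.

    data_bits: data volume d_ci of each task; action: 0 = local, 1 = offload.
    """
    total = 0.0
    for d, s in zip(data_bits, action):
        local = d / R_LOCAL + d * Q_LOCAL                       # Eq. (2) + Eq. (3)
        edge = d / X_EDGE + d / R_EDGE + d * (Y_EDGE + Q_EDGE)  # Eq. (5) + Eq. (6)
        total += (1 - s) * local + s * edge
    return total

def candidate_action(weights, data_bits):
    # Stand-in for one DNN: threshold a per-task weighted score into {0, 1}.
    return [1 if w * d > 0 else 0 for w, d in zip(weights, data_bits)]

def dtoo_decision(data_bits, n_dnn=4, seed=0):
    """Let several candidate generators propose actions; keep the argmin of Wtotal."""
    rng = random.Random(seed)
    best_action, best_cost = None, float("inf")
    for _ in range(n_dnn):
        weights = [rng.uniform(-1.0, 1.0) for _ in data_bits]
        action = candidate_action(weights, data_bits)
        cost = w_total(data_bits, action)
        if cost < best_cost:
            best_action, best_cost = action, cost
    return best_action, best_cost

action, cost = dtoo_decision([2.0e6, 5.0e5, 1.0e6])
```

In DTOO itself, the selected action would also be fed back as a training label so that the multiple DNNs learn from each other; only the candidate-generation and argmin-selection steps are shown here.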
The data processing rate of the mobile devices carried by workers is 1.50 × 10^7 bit/s, and the energy consumption of mobile devices processing data is 6.60 × 10^8 J/bit. In addition, this paper sets the number of edge servers within the task scope to five. The detailed parameter settings are shown in Table 2.

Table 2 Experimental parameter settings
Variable    Value
rlocal      1.50 × 10^7 bit/s
qlocal      6.60 × 10^8 J/bit
xedge       1.25 × 10^8 bit/s
redge       8.38 × 10^8 bit/s
yedge       7.81 × 10^9 J/bit
qedge       8.19 × 10^9 J/bit

The experiments in this paper are all implemented in the Python environment, on a laptop with an Intel(R) Core(TM) i7-10750H CPU and 16 GB of memory.

Performance evaluation and comparative test
Figure 3 shows the convergence performance of the algorithm under different learning rates. From Fig. 3(a) and (b), it can be seen that the algorithm converges best when batch = 64. In Fig. 3(c), the abscissa is the training step and the ordinate is the total resource consumption. It can be seen from Fig. 3 that a learning rate that is too high or too low fails to achieve a good convergence effect; the convergence effect is better when the learning rate is 0.01 than when it is 0.001. In Fig. 3(d), the abscissa is the training step and the ordinate is the gain rate; the figure shows that the best effect is obtained when the learning rate is 0.01. Therefore, the batch is set to 64 and the learning rate to 0.01 in the experiments.

The effect of the number of DNNs on the convergence of the algorithm can be seen in Fig. 4(a). When there is only one DNN, mutual learning between DNNs cannot be performed, so the algorithm cannot converge. When the number of DNNs is greater than 1, the gain rate increases with the number of DNNs, and convergence requires fewer steps. Therefore, multiple DNNs can be selected for model training when the hardware permits. It can be seen in Fig. 4(b) that the convergence performance of the Adam optimizer is the best.

In order to verify the superiority of the proposed algorithm in terms of task completion rate, this paper conducts comparison experiments with the RC [57] and LRU [58] algorithms and adds the traditional Arrival Time task Ranking (ATR) algorithm for comparison. As shown in Fig. 5, the task completion rates of the four algorithms all decrease as the number of tasks increases, because newly added tasks may not be allocated computing resources in time, resulting in task failure. It can be seen from Fig. 5 that the DTOO algorithm proposed in this paper has the highest task completion rate: in DTOO, tasks and edge server addresses are arranged in their stacks according to priority, so tasks with higher priority obtain more computing resources. The proposed algorithm is therefore superior in terms of task completion rate.

In order to verify the superiority of the proposed algorithm in terms of resource consumption, this paper conducts comparison experiments with Deep Q-Network [45], MUMTO [59], and the Greedy algorithm, and adds two baselines in which all tasks are processed locally and all tasks are offloaded to the edge server, respectively. As shown in Fig. 6, the total resource consumption of all algorithms increases with the number of tasks. If all tasks are offloaded to edge servers for processing, high resource consumption occurs. The DTOO algorithm proposed in this paper is similar in utility to the Greedy algorithm, but since the Greedy algorithm enumerates all offloading strategies, it occupies a considerable amount of system memory. Therefore, the proposed algorithm is superior in resource consumption.

Conclusion
This paper focuses on the task offloading problem of MCS in an edge-cloud environment. Aiming at the problem of task offloading strategy selection, this paper proposed a DTOO algorithm based on DNNs, which obtains an approximately optimal offloading strategy through learning among multiple neural units and aims to resolve the conflict between resource consumption and quality of service in practical MCS applications. To improve the completion rate of tasks, this paper proposed a stack-based resource sorting method: the tasks are arranged in the task stack according to their priority, and the server addresses are arranged in the address stack according to their idle level. After task offloading is completed, tasks are allocated according to their priority, thereby improving the task completion rate. Finally, performance tests and comparison experiments are carried out on the gMission dataset, a general spatial crowdsourcing research platform, and it is verified that the proposed algorithm performs well in balancing resource consumption and service quality as well as in task completion rate. In future work, the influence of workers' preferences on task offloading decisions will be considered, and the task offloading strategy will be further optimized.

Abbreviations
MCS    Mobile CrowdSourcing
DNN    Deep Neural Network
DTOO   Deep Neural Network-based Task Offloading Optimization
IoT    Internet of Things
EH     Energy Harvesting
MINLP  Mixed Integer Nonlinear Programming
GCGH   Gini Coefficient-based Greedy Heuristic
ATR    Arrival Time task Ranking
RC     Trend Caching
LRU    Least Recently Used
MUMTO  Multi-User Multi-Task Optimization
23. Huang J, Gao H, Wan S et al (2023) Aoi-aware energy control and computation offloading for industrial IoT. Futur Gener Comput Syst 139:29–37
24. Chen X, Jiao L, Li W, Fu X (2015) Efficient multi-user computation offloading for mobile-edge cloud computing. IEEE/ACM Trans Networking 24:2795–2808
25. Huang L, Feng X, Zhang C, Qian L, Wu Y (2019) Deep reinforcement learning-based joint task offloading and bandwidth allocation for multi-user mobile edge computing. Digit Commun Netw 5:10–17
26. Bi R, Liu Q, Ren J, Tan G (2020) Utility aware offloading for mobile-edge computing. Tsinghua Sci Technol 26:239–250
27. Zhang Q, Wang Y, Yin G, Tong X, Sai AMVV, Cai Z (2022) Two-stage bilateral online priority assignment in spatio-temporal crowdsourcing. IEEE Trans Serv Comput 8:516–530
28. Wang Y, Cai Z, Zhan ZH, Zhao B, Tong X, Qi L (2020) Walrasian equilibrium-based multiobjective optimization for task allocation in mobile crowdsourcing. IEEE Trans Comput Soc Syst 7(4):1033–1046
29. Chen Y, Hu J, Zhao J, Min G (2023) Qos-aware computation offloading in leo satellite edge computing for IoT: A game-theoretical approach. Chin J Electron. https://doi.org/10.1109/TMC.2022.3223119
30. Shi W, Cao J, Zhang Q, Li Y, Xu L (2016) Edge computing: Vision and challenges. IEEE Internet Things J 3:637–646
31. Sun Z, Wang Y, Cai Z, Liu T, Tong X, Jiang N (2021) A two-stage privacy protection mechanism based on blockchain in mobile crowdsourcing. Int J Intell Syst 36(5). https://doi.org/10.1002/int.22371
32. Liu T, Wang Y, Li Y, Tong X (2020) Privacy protection based on stream cipher for spatio-temporal data in IoT. IEEE Internet Things J 7(9):7928–7940
33. Cai Z, Zheng X (2018) A private and efficient mechanism for data uploading in smart cyber-physical systems. IEEE Trans Netw Sci Eng 7(2):766–775
34. Wang T, Lu Y, Cao Z, Shu L, Zheng X, Liu A, Xie M (2019) When sensor-cloud meets mobile edge computing. Sensors 19(23):5324
35. Zhao W, Liu J, Guo H, Hara T (2018) Etc-IoT: Edge-node-assisted transmitting for the cloud-centric internet of things. IEEE Netw 32(3):101–107
36. Cai Z, Xiong Z, Xu H, Wang P, Li W, Pan Y (2021) Generative adversarial networks: A survey toward private and secure applications. ACM Comput Surv (CSUR) 54(6):1–38
37. Ren J, Yu G, He Y, Li GY (2019) Collaborative cloud and edge computing for latency minimization. IEEE Trans Veh Technol 68(5):5031–5044
38. Wang W, Wang Y, Duan P, Liu T, Tong X, Cai Z (2022) A triple real-time trajectory privacy protection mechanism based on edge computing and blockchain in mobile crowdsourcing. IEEE Trans Mob Comput 1–18
39. Xiang C, Zhang Z, Qu Y, Lu D, Fan X, Yang P, Wu F (2020) Edge computing-empowered large-scale traffic data recovery leveraging low-rank theory. IEEE Trans Netw Sci Eng 7(4):2205–2218
40. Xiang C, Li Y, Zhou Y, He S, Qu Y, Li Z, Gong L, Chen C (2022) A comparative approach to resurrecting the market of mod vehicular crowdsensing. In: Proc. IEEE Conf. Comput. Commun. IEEE, London, p 1–10
41. Xiang C, Yang P, Tian C, Zhang L, Lin H, Xiao F, Zhang M, Liu Y (2015) Carm: Crowd-sensing accurate outdoor rss maps with error-prone smartphone measurements. IEEE Trans Mob Comput 15(11):2669–2681
42. Wang Y, Gao Y, Li Y, Tong X (2020) A worker-selection incentive mechanism for optimizing platform-centric mobile crowdsourcing systems. Comput Netw 171:107144
43. Dinh T, Tang J, La Q, Quek T (2017) Offloading in mobile edge computing: Task allocation and computational frequency scaling. IEEE Trans Commun 65(8):3571–3584
44. Wu H, Sun Y, Wolter K (2018) Energy-efficient decision making for mobile cloud offloading. IEEE Trans Cloud Comput 8(2):570–584
45. Xu J, Chen L, Zhou P (2018) Joint service caching and task offloading for mobile edge computing in dense networks. In: IEEE INFOCOM 2018 - IEEE Conference on Computer Communications. IEEE, Honolulu, p 207–215
46. Shu C, Zhao Z, Han Y, Min G, Duan H (2019) Multi-user offloading for edge computing networks: A dependency-aware and latency-optimal approach. IEEE Internet Things J 7(3):1678–1689
47. Mao Y, Zhang J, Letaief K (2016) Dynamic computation offloading for mobile-edge computing with energy harvesting devices. IEEE J Sel Areas Commun 34(12):3590–3605
48. Zhao P, Tian H, Qin C, Nie G (2017) Energy-saving offloading by jointly allocating radio and computational resources for mobile edge computing. IEEE Access 5:11255–11268
49. Chen Y, Gu W, Xu J et al (2022) Dynamic task offloading for digital twin-empowered mobile edge computing via deep reinforcement learning. China Commun. https://doi.org/10.1002/dac.5154
50. Mnih V, Kavukcuoglu K, Silver D, Rusu A, Veness J, Bellemare M, Graves A, Riedmiller M, Fidjeland A, Ostrovski G (2015) Human-level control through deep reinforcement learning. Nature 518(7540):529–533
51. Huang J, Wan J, Lv B, Ye Q et al (2023) Joint computation offloading and resource allocation for edge-cloud collaboration in internet of vehicles via deep reinforcement learning. IEEE Syst J. https://doi.org/10.1109/JSYST.2023.3249217
52. Xu Z, Wang Y, Tang J, Wang J, Gursoy MC (2017) A deep reinforcement learning based framework for power-efficient resource allocation in cloud rans. In: 2017 IEEE International Conference on Communications (ICC). IEEE, Paris, p 1–6
53. Ye H, Li G, Juang B (2017) Power of deep learning for channel estimation and signal detection in ofdm systems. IEEE Wirel Commun Lett 7(1):114–117
54. He Y, Zhang Z, Yu F, Zhao N, Yin H, Leung V, Zhang Y (2017) Deep-reinforcement-learning-based optimization for cache-enabled opportunistic interference alignment wireless networks. IEEE Trans Veh Technol 66(11):10433–10445
55. Huang L, Feng X, Qian L, Wu Y (2018) Deep reinforcement learning-based task offloading and resource allocation for mobile edge computing. In: International Conference on Machine Learning and Intelligent Communications. MLICOM, Hangzhou, p 33–42
56. gmission dataset. http://gmission.github.io/. Accessed 2022
57. Li S, Xu J, van der Schaar M, Li W (2016) Trend-aware video caching through online learning. IEEE Trans Multimed 18(12):2503–2516
58. Jin W, Li X, Yu Y, Wang Y (2013) Adaptive insertion and promotion policies based on least recently used replacement. IEICE Trans Inf Syst 96(1):124–128
59. Chen MH, Liang B, Dong M (2016) Joint offloading decision and resource allocation for multi-user multi-task mobile cloud. In: 2016 IEEE International Conference on Communications (ICC). IEEE, Paris, p 1–6

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.