
Meng et al. Journal of Cloud Computing: Advances, Systems and Applications (2023) 12:76
https://doi.org/10.1186/s13677-023-00450-6

RESEARCH Open Access

Task offloading optimization mechanism based on deep neural network in edge-cloud environment
Lingkang Meng1, Yingjie Wang1*, Haipeng Wang2, Xiangrong Tong1, Zice Sun1 and Zhipeng Cai3 

Abstract
With the rise of edge computing technology and the development of intelligent mobile devices, task offloading in the edge-cloud environment has become a research hotspot. Task offloading is also a key research issue in Mobile CrowdSourcing (MCS), where crowd workers collect sensed data through the smart devices they carry and either offload it to edge-cloud servers or perform the computing tasks locally. Current research mainly focuses on reducing resource consumption in edge-cloud servers, but fails to consider the conflict between resource consumption and service quality. Therefore, this paper considers learning to generate offloading strategies among multiple Deep Neural Networks (DNNs) and proposes a Deep Neural Network-based Task Offloading Optimization (DTOO) algorithm to obtain an approximately optimal task offloading strategy on edge-cloud servers, resolving the conflict between resource consumption and service quality. In addition, a stack-based offloading strategy is studied: a resource sorting method allocates computing resources reasonably, thereby reducing the probability of task failure. Compared with existing algorithms, the DTOO algorithm can balance the conflict between resource consumption and service quality in traditional edge-cloud applications while ensuring a higher task completion rate.
Keywords: Edge-cloud, Task offloading, Mobile crowdsourcing, Deep neural network, Resource consumption, Service quality

Introduction
With the rapid development of Internet of Things (IoT) [1] and 5G technology [2], edge-cloud has been integrated into daily applications. MCS, as a new mode of perception network, data collection [3] and information service, has become an indispensable part of today's society [4]. MCS is a process in which crowd workers form an interactive perception network by carrying mobile devices to a designated location for information collection and crowdsourcing platforms. The crowdsourcing platform publishes tasks and recruits crowd workers to complete the tasks [5], which provides many conveniences to people's lives, such as collecting information, analyzing data [6], and sharing knowledge [7], so it has received extensive attention in various fields. Academics at Zurich University [8] designed an environmental monitoring model that uses a smartphone carrying a sensor to detect ozone levels. The Smart City project in Serbia [9] uses sensors provided by Libelium [10] on public transport equipment to monitor air quality. In addition, the well-known Waze company provides commercial map services based on the MCS model. Mobile crowdsourcing has become a research hotspot in Infocom, Ubicomp, Percom, and Mobicom [11].

*Correspondence: Yingjie Wang, towangyingjie@163.com
1 School of Computer and Control Engineering, Yantai University, Yantai, China
2 Institute of Information Fusion, Naval Aviation University, Yantai, China
3 Department of Computer Science, Georgia State University, Atlanta, USA

© The Author(s) 2023. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

In a typical mobile crowdsourcing system, a complete cloud-based architecture consists of the cloud platform, task requesters, and crowd workers. First, the task requester issues the task through the cloud platform. Then, appropriate workers are selected through task assignment [12]. Crowd workers collect data through their mobile devices and upload it to the cloud platform. Finally, the platform evaluates and updates the worker's reputation value [13–15] based on the quality of the worker's uploaded data [16]. With the emergence of technologies such as intelligent driving [17], task requesters have ever higher requirements for real-time data [18, 19]; workers uploading data to the cloud platform incur large data delays [20], and traditional centralized cloud platforms cannot meet this requirement. The emergence of edge-cloud technology has largely solved this problem. Edge-cloud refers to the use of an edge-cloud server that integrates network, computing, storage, and application core capabilities on the side close to the mobile device or data source [21] to provide the nearest computing service. When workers use mobile devices to upload data, they can directly interact with the nearest edge node, which greatly reduces data transmission latency [22]. In the edge-cloud environment, computing tasks are performed on a powerful edge-cloud server, which has the advantages of easy installation and small size [23], but whose load capacity and computing power are still far inferior to cloud servers. Chen et al. [24] proposed a game theory-based task offloading algorithm, but this algorithm requires multiple interactions between crowd workers and edge servers, which consumes a lot of resources. Huang et al. [25] proposed a task offloading and resource allocation scheme based on a deep Q-network, but its table-based search is not suitable for processing high-dimensional data. Therefore, the problem of optimizing the task offloading strategy in edge servers urgently needs to be solved [26]. The main challenges for task offloading in MCS are as follows.

1. In the practical application of MCS, workers often choose the nearest edge-cloud server to upload sensory data. If a large number of workers are near an edge-cloud server and most of them choose to offload tasks to it, the server may be overloaded by excessive data processing demand and become paralyzed. Therefore, how to make reasonable task offloading decisions is an important research question for preventing excessive load on edge-cloud servers.
2. Although researchers have proposed many schemes to solve the task distribution problem among multiple edge-cloud servers, the computing power of edge-cloud servers is limited. If there are too many tasks in the task queue, some time-sensitive tasks may not be handled in time. Therefore, how to reasonably allocate the computing resources on the edge-cloud server is a key factor in improving the success rate of task allocation.

In response to the above challenges, this paper studies a task offloading optimization algorithm, DTOO, for MCS, which generates a near-optimal task offloading strategy and resolves the conflict between resource consumption and quality of service. A resource allocation scheme is designed to improve the success rate of tasks. The main contributions of this paper are summarized as follows.

1. This paper designs a task offloading algorithm, DTOO, based on DNNs, which obtains an approximately optimal offloading strategy through learning among multiple neural units, so as to resolve the conflict between edge-cloud server resource consumption and service quality.
2. This paper proposes a stack-based resource sorting scheme, which matches tasks to different computing resources according to their timeliness level, thereby improving the success rate of tasks.
3. The proposed DNN-based task offloading scheme and stack-based task ranking mechanism are analyzed and evaluated through comparison experiments on real datasets. The experimental results verify the superiority of this scheme.

The rest of this paper is organized as follows. Section II introduces the related work. Section III describes the DTOO algorithm and resource allocation scheme. Section IV presents the comparison experiments and the discussion of the experimental results. Finally, Section V presents the conclusion.

Related work
In recent years, more and more attention has been paid to research on task assignment [27, 28] based on mobile crowdsourcing in the edge-cloud environment, aiming to design an optimal task offloading strategy with low latency, low energy consumption, and high service quality. Many scholars have conducted in-depth research on this and proposed feasible approaches.

Edge computing
With the popularization of the IoT and the promotion of cloud services [29], edge computing has emerged as a new computing paradigm. Edge computing refers to delegating data processing to the edge of the network

[30]. This mode can reduce request delay and network bandwidth while ensuring data security and privacy [31–33]. The core part of edge computing is to migrate some or all of the computing tasks in cloud computing to the vicinity of mobile devices. This highly promising approach can overcome some of the shortcomings of cloud computing [34]. Zhao et al. [35] designed a new mobile device data transmission scheme, introduced edge computing into the cloud platform-centric architecture, and used edge nodes to assist data transmission to solve the problem of excessive bandwidth consumption in traditional cloud platform solutions [36]. This scheme explores the bandwidth consumption of edge nodes through the edge computing paradigm. Ren et al. [37] explored the problem of joint communication and computing resource allocation, aiming to find an optimal solution that minimizes latency in cloud and edge-cloud collaborative systems. They also designed an offloading scheme based on distributed computing, which achieves excellent computation offloading ability and can adapt to changes in user scale [38], optimizing multi-user resource offloading in edge clouds under a multi-channel wireless interference environment.

Task offloading based on edge cloud
The problem of offloading computing tasks in edge computing is a research hotspot [39]. In the actual crowdsourcing environment, task offloading is affected by various external factors, such as the hardware performance of the device, the network environment in which the worker is located [40, 41], and the worker's personalized choices [42]. This makes it particularly important to formulate a reasonable, dynamically changing task offloading strategy according to the external environment. Some existing works mainly study how to make task assignment decisions in an offline or online setting, and most of the research takes minimizing task completion time and resource consumption as the optimization goal. For example, Dinh et al. [43] considered the two cases of whether or not the CPU frequency of the edge server can be adjusted, and proposed a linear relaxation-based method and an exhaustive search-based scheme to solve them, respectively. Obviously, the exhaustive approach consumes a lot of computing resources. To strike a balance between resource consumption and computational latency, Wu et al. [44] proposed a Lyapunov-based task offloading algorithm, which reduces the resource consumption of the device while satisfying the delay constraint. Considering service heterogeneity, unknown system dynamics, spatial demand coupling, and decentralized coordination, Xu et al. [45] proposed an online task offloading algorithm based on Lyapunov optimization and Gibbs sampling. Shu et al. [46] designed an algorithm that supports multi-user task offloading, dividing tasks into subtasks and offloading them to edge servers to reduce the end-to-end task execution time. Mao et al. [47] studied Energy Harvesting (EH) technology to power mobile devices through renewable energy; based on Lyapunov optimization, the frequency and transmit power of the CPU are optimized to reduce the execution delay of the task. Zhao et al. [48] jointly optimized the offloading decision, radio resource allocation, and computing resource allocation, transformed the resource minimization problem into a Mixed Integer Nonlinear Programming (MINLP) problem, and proposed a Gini coefficient-based greedy heuristic (GCGH) to solve it. Although computation offloading is the core technology in edge computing, how to allocate resources to improve the task completion rate should also be considered in practical crowdsourcing applications.

Deep learning
As an emerging technology in machine learning, deep learning's main purpose is to build and train neural networks that analyze and learn in a way inspired by the human brain [49]. The essence of deep learning is to perform hierarchical feature representation on data, abstracting low-level features into high-level features through neural networks. DNNs composed of multi-layer perceptrons have achieved major breakthroughs in the fields of image classification and recognition, natural language processing, and robot control. Mnih et al. [50] used deep neural networks to develop a novel surrogate model called a deep Q-network [51], which bridges the gap between high-level sensory input and decision-making actions. Deep learning is also widely used in the field of wireless communication, for example in resource allocation [52], signal detection [53], and data caching [54]. In recent years, some scholars have used deep learning models to solve the task offloading problem in the edge-cloud environment. Huang et al. [55] proposed a deep reinforcement learning-based method to solve the task offloading and resource allocation problem, with the aim of giving each user a satisfactory task offloading decision and resource allocation scheme. However, the Q-table-based search nature of deep Q-learning makes its performance unremarkable when dealing with high-dimensional data.

Existing optimization schemes do not take into account the limitation of computational dimensions,

and general optimization algorithms cannot efficiently deal with the complexity of data in the actual crowdsourcing environment, especially when faced with a large number of crowd workers. Existing work has also failed to consider the allocation of resources when optimizing the task offloading scheme. Therefore, in view of the above problems, this paper considers the balance between resource consumption and quality of service, as well as the task completion rate. A DTOO algorithm is proposed to produce an efficient offloading decision through mutual learning among DNNs. In order to improve the task completion rate and allocate resources according to the priority of tasks, a stack-based resource allocation scheme is designed.

System design
In this section, first, the task offloading problem in the edge-cloud environment is defined; then, the system model of this paper is introduced; and finally, the proposed algorithm is described in detail. The main symbol definitions are shown in Table 1.

Table 1 Symbols and definitions

Notation — Description
C = {c_1, c_2, c_3, ..., c_n} — Crowd task set
l = {l_{c_1}, l_{c_2}, l_{c_3}, ..., l_{c_n}} — Crowd task posted-location set
T_{start} — Crowd task release time
T_{end} — Crowd task deadline
T = {t_{c_1}, t_{c_2}, t_{c_3}, ..., t_{c_n}} — Maximum allowable delay set for crowd tasks
D = {d_{c_1}, d_{c_2}, d_{c_3}, ..., d_{c_n}} — Amount of data contained in each crowd task
W = {w_1, w_2, w_3, ..., w_m} — Crowd worker set
W_{id} — Crowd worker id
S_{c_i w_j} — Task offload policy
r_{local} — Local data processing rate
q_{local} — Energy consumption per bit of data processed locally
x_{edge} — Edge server data transfer rate
r_{edge} — Rate at which the edge server processes data
y_{edge} — Transmission energy consumption per bit of data to the edge server
q_{edge} — Energy consumption per bit of data processed by the edge server
T_{local}(c_i, w_j) — Local time consumption of worker w_j for task c_i
E_{local}(c_i, w_j) — Local energy consumption of worker w_j for task c_i
T_{edge}(c_i, w_j) — Edge time consumption of worker w_j for task c_i
E_{edge}(c_i, w_j) — Edge energy consumption of worker w_j for task c_i
W_{local} — Total consumption of locally processed computing tasks
W_{edge} — Total consumption of edge servers processing computing tasks

Problem definition
Definition 1 (Crowd Task): In an MCS system, crowd tasks are uploaded to the platform by the task requester and released by the platform, defined as C = {c_1, c_2, c_3, ..., c_n}. Each task also has its properties: the location where the task is published is defined as l = {l_{c_1}, l_{c_2}, l_{c_3}, ..., l_{c_n}}; the time of task release is defined as T_{start}; the task deadline is defined as T_{end}. Therefore, the maximum allowable delay of a task can be defined as T = {t_{c_1}, t_{c_2}, t_{c_3}, ..., t_{c_n}}. The data volume of the task is defined as D = {d_{c_1}, d_{c_2}, d_{c_3}, ..., d_{c_n}}.

Definition 2 (Crowd Worker): Crowd workers collect data with their own mobile devices and upload the data to the crowd platform. Crowd workers are defined as W = {w_1, w_2, w_3, ..., w_m}. Each crowd worker also has attributes; the id of the worker is defined as W_{id}.

Definition 3 (Task Offload Policy): Each crowd worker can choose to process computing tasks locally or offload them to edge servers for processing. Therefore, treating the task offloading strategy as a binary problem, processing a computing task locally is recorded as 0, and offloading it to an edge server is recorded as 1. The task offloading strategy is defined as Eq. (1):

S_{c_i w_j} = \begin{cases} 0, & \text{local} \\ 1, & \text{offload} \end{cases}   (1)

where S_{c_i w_j} represents worker w_j's choice of offloading strategy for task c_i: S_{c_i w_j} = 0 indicates that the task is executed locally, and S_{c_i w_j} = 1 indicates that the task is offloaded to the edge server for execution.

Definition 4 (Local Computation): This models the situation where users choose to process computing tasks locally. For computation tasks executed locally, the time consumption is defined as Eq. (2):

T_{local}(c_i, w_j) = \frac{d_{c_i}}{r_{local}}   (2)

where T_{local}(c_i, w_j) is the time consumption of worker w_j for task c_i, and r_{local} is the rate at which data is processed locally. The energy consumption of local processing is defined as Eq. (3):

E_{local}(c_i, w_j) = d_{c_i} \times q_{local}   (3)

where E_{local}(c_i, w_j) is the energy consumption of worker w_j for task c_i, and q_{local} is the consumption per bit of data processed locally.
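As a concrete illustration of the local-computation model of Eqs. (2) and (3), the following minimal Python sketch evaluates both quantities for a single task; the task size and the per-bit energy value are illustrative assumptions, not the paper's Table 2 settings.

```python
def local_time(d_ci, r_local):
    """Eq. (2): local time consumption T_local = d_ci / r_local."""
    return d_ci / r_local

def local_energy(d_ci, q_local):
    """Eq. (3): local energy consumption E_local = d_ci * q_local."""
    return d_ci * q_local

# Example: a 3 Mbit sensing task processed on the worker's device.
d = 3e6    # task data volume d_ci in bits (illustrative)
r = 1.5e7  # local processing rate r_local in bit/s
q = 2e-9   # energy per bit q_local in J/bit (illustrative)

print(local_time(d, r))    # 0.2 (seconds)
print(local_energy(d, q))  # energy in joules
```

Both quantities grow linearly in the task's data volume, which is why the closed forms of Eqs. (4) and (7) later reduce to weighted sums over the offloading decisions.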

Therefore, the joint offloading strategy defines the total consumption of locally processed computing tasks as Eq. (4):

W_{local} = \sum_{i=1}^{n} \sum_{j=1}^{m} [T_{local}(c_i, w_j) + E_{local}(c_i, w_j)] \times (1 - S_{c_i w_j})   (4)

Definition 5 (Edge Computing): This models the user's choice to offload computing tasks to edge servers. The set of edge servers is defined as ES = {es_1, es_2, es_3, ..., es_z}, and the time consumption for computing tasks offloaded to edge servers is defined as Eq. (5):

T_{edge}(c_i, w_j) = \frac{d_{c_i}}{x_{edge}} + \frac{d_{c_i}}{r_{edge}}   (5)

where T_{edge}(c_i, w_j) is the time consumption of worker w_j for task c_i, x_{edge} is the data transmission rate of the edge server, and r_{edge} is the data processing rate of the edge server. The energy consumption of offloading computing tasks to edge servers is defined as Eq. (6):

E_{edge}(c_i, w_j) = d_{c_i} \times y_{edge} + d_{c_i} \times q_{edge}   (6)

where E_{edge}(c_i, w_j) is the energy consumption of worker w_j for task c_i, y_{edge} is the energy consumption of data transmission per bit, and q_{edge} is the energy consumption of the edge server for processing each bit of data. Therefore, the joint offloading strategy defines the total consumption of edge servers processing computing tasks as Eq. (7):

W_{edge} = \sum_{i=1}^{n} \sum_{j=1}^{m} [T_{edge}(c_i, w_j) + E_{edge}(c_i, w_j)] \times S_{c_i w_j}   (7)

Then the total consumption of the system is defined as Eq. (8):

W_{total} = \sum_{i=1}^{n} \sum_{j=1}^{m} [T_{local}(c_i, w_j) + E_{local}(c_i, w_j)] \times (1 - S_{c_i w_j}) + \sum_{i=1}^{n} \sum_{j=1}^{m} [T_{edge}(c_i, w_j) + E_{edge}(c_i, w_j)] \times S_{c_i w_j}   (8)

In short, to minimize the resource consumption of task completion, the goal of this stage is to find an optimal task offloading strategy that minimizes W_{total}.

Definition 6 (Resource Allocation): This models resource allocation among edge servers. Due to the limited computing power and load capacity of edge servers, and because crowd tasks are time-sensitive, edge servers need to complete computing tasks within a certain period. Therefore, according to the maximum allowable delay of the tasks, this paper sorts the tasks by priority and stores them in the task stack, which corresponds to the address stack in the edge server. When task c_k joins, it is first determined whether the total delay \sum_{u=1}^{k-1} t_{c_u} of its preceding tasks will exceed the maximum allowable delay of task c_k; if it does, the task is allocated to another idle address.

System model design
In the edge-cloud environment, the computing tasks of the central cloud are sunk to the edge of the network, which greatly reduces network latency. At the same time, this paper uses mutual learning among multiple DNNs to obtain an approximately optimal offloading strategy, which ensures service quality under the premise of low resource consumption. A stack-based sorting mechanism that reasonably allocates resources is used to improve the task completion rate. In practical crowd applications, each crowd worker has multiple jobs that need to be processed locally or at the edge server, and the offloading decision is represented by 0 or 1: S_{c_i w_j} = 0 means that the task is executed locally, and S_{c_i w_j} = 1 indicates that the task is offloaded to the edge server for execution. The system model is shown in Fig. 1. First, a DNN is used to generate candidate offloading actions by taking the task scale carried by crowd workers as the input of the model. Then, an offloading strategy that meets the optimization objective is selected as the output. Computing tasks that are offloaded to edge servers for processing are sorted according to their priority. In the task stack, tasks with larger maximum allowable delays are placed at the bottom of the stack, and tasks with smaller delays are placed at the top. In the address stack, servers with more resources, that is, addresses with higher idle levels, are placed at the top of the stack, and those with fewer resources at the bottom. The task stack and the address stack are aligned in order so that the most urgent tasks are allocated the addresses with the most resources, which greatly improves the success rate of the tasks.

Since there may be crowd workers in the candidate set who are at the boundary of an edge server's service range and have not been recruited by the platform, and these workers may have better utility, the movement trajectories of the workers are predicted before recruiting them, so as to accurately locate the partitions where the workers are located based on their historical trajectories and social networks. Then, all the recruited workers are assigned to edge servers.
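To make the objective of Eq. (8) concrete, the sketch below evaluates W_total for a binary offloading vector and then brute-forces all 2^(n·m) strategies on a tiny instance, which is exactly the search that the DNN-based approach later approximates. All task sizes, rates, and per-bit energies here are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of Eq. (8): total consumption of a binary offloading
# strategy, plus exhaustive search over all strategies for a tiny case.
from itertools import product

def w_total(D, S, r_local, q_local, x_edge, r_edge, y_edge, q_edge):
    """Sum Eq.-(2)/(3) costs for local tasks and Eq.-(5)/(6) costs for
    offloaded tasks, weighted by the 0/1 decisions S as in Eq. (8)."""
    total = 0.0
    for d, s in zip(D, S):
        t_loc = d / r_local                  # Eq. (2)
        e_loc = d * q_local                  # Eq. (3)
        t_edge = d / x_edge + d / r_edge     # Eq. (5)
        e_edge = d * y_edge + d * q_edge     # Eq. (6)
        total += (t_loc + e_loc) * (1 - s) + (t_edge + e_edge) * s
    return total

D = [1e6, 8e6, 2e7]   # data volumes of three tasks in bits (illustrative)
params = dict(r_local=1.5e7, q_local=2e-9,
              x_edge=1.25e8, r_edge=8e8, y_edge=1e-9, q_edge=1e-9)

# Enumerate every 0/1 strategy; this is the 2^(n*m)-sized search space
# that makes the exact problem intractable at scale.
best = min(product([0, 1], repeat=len(D)),
           key=lambda S: w_total(D, S, **params))
print(best, w_total(D, best, **params))
```

With these illustrative parameters the edge server is both faster and cheaper per bit, so the search selects full offloading; the point is that the enumeration doubles in size with every additional task-worker pair, motivating the learned approximation of the next section.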

Fig. 1  System Model Design

DTOO algorithms
In this section, the algorithm for generating offloading decisions using DNNs is presented. For the above-mentioned problem, we only need to find an optimal offloading strategy S_{cw} such that W_{total} reaches its minimum value. Since the size of the set S_{cw} is 2^{nm}, finding an optimal offloading strategy is NP-hard. Therefore, an approximately optimal offloading policy function \Pi is obtained by training the DNNs, such that \Pi : S \leftarrow D, as Eq. (9):

\arg\min \Phi(S) = \arg\min W_{total}(D, S_{c_i w_j}), \quad i \in n; \; j \in m   (9)

where D is the input of the model. The function value is optimized by performing gradient descent during training, minimizing the cross-entropy loss function of Eq. (10):

L(\Theta) = -[S \log \hat{S} + (1 - S) \log(1 - \hat{S})]   (10)

The DTOO algorithm proposed in this paper is shown in Algorithm 1.

Algorithm 1 DTOO algorithm

Algorithm 1 shows the optimization process of the task offloading strategy based on deep neural networks. The input is the dataset D of the amounts of data contained in the crowd tasks. The output is the approximately optimal task offloading strategy S_{cw}. Line 1 traverses all task data volumes in D. Lines 2-3 input d_{c_i} into all DNNs to obtain the candidate offloading strategies \tilde{S}.

Lines 4-6 select the offloading strategy that minimizes L(\Theta) and store it in memory for the next round of training. Lines 7-9 sample data from memory for the next round of training and update L(\Theta). Training ends at Lines 10-11, and the approximately optimal offloading strategy S_{cw} is finally obtained.
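The candidate-generation and memory loop described above can be sketched as follows. To keep the sketch self-contained and runnable, the trained DNNs are replaced by fixed random linear scorers and the gradient step on the Eq.-(10) loss is omitted, so this only illustrates the control flow (propose candidates, keep the W_total-minimizing one, store it in replay memory); the cost values are likewise illustrative.

```python
# Simplified control-flow sketch of the DTOO loop; random linear
# scorers stand in for the K DNNs, and training updates are omitted.
import random

def w_total(strategy, D, local_cost=8e-8, edge_cost=2e-8):
    # Toy per-bit costs standing in for the full Eq.-(2)-(8) model.
    return sum(d * (edge_cost if s else local_cost)
               for d, s in zip(D, strategy))

random.seed(0)
K, n = 4, 5                                       # number of "DNNs", number of tasks
weights = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(K)]
D = [random.uniform(1e6, 1e7) for _ in range(n)]  # task data volumes (bits)
memory = []                                       # replay memory of (input, label) pairs

for step in range(3):                             # a few "training" rounds
    # Each scorer k maps the input D to a candidate 0/1 strategy
    # (a real DNN would output per-task offloading probabilities).
    candidates = [[1 if w * d > 0 else 0 for w, d in zip(weights[k], D)]
                  for k in range(K)]
    # Keep the candidate with the lowest total consumption (Lines 4-6).
    best = min(candidates, key=lambda s: w_total(s, D))
    # A gradient step on the Eq.-(10) cross-entropy loss would sample
    # from this memory to pull all K networks toward the best label.
    memory.append((D, best))

print(best, round(w_total(best, D), 4))
```

The key design point is that the networks do not need labeled optima: the best candidate among their own proposals serves as the training label, which is why a single DNN (with only one proposal per input) cannot improve, matching the Fig. 4(a) observation.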

Resource allocation algorithm


When crowd workers offload computing tasks to edge servers, the computing tasks are added to the task stack in order of priority. The administrator first matches the task at the top of the task stack with an address of high idle level in the address stack. Then, when an address has few resources left after a round of task allocation, the administrator allocates the next task to the next address with a higher idle level. The implementation process is shown in Algorithm 2.
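The stack matching just described can be sketched in a few lines: tasks are pushed so that the most urgent one (smallest maximum allowable delay) ends up on top of the task stack, addresses so that the idlest server address ends up on top of the address stack, and the two stacks are popped together. The field names and values are illustrative, and the cumulative-delay overflow check of Algorithm 2 (Lines 11-14) is omitted for brevity.

```python
# Hedged sketch of the stack-based task/address matching.
def allocate(tasks, addresses):
    """tasks: list of (task_id, max_allowable_delay);
    addresses: list of (address_id, idle_level).
    Returns a dict mapping each task to an edge-server address."""
    # Sort descending on delay / ascending on idle level so that
    # list.pop() yields the most urgent task and the idlest address.
    task_stack = sorted(tasks, key=lambda t: t[1], reverse=True)
    addr_stack = sorted(addresses, key=lambda a: a[1])
    assignment = {}
    while task_stack and addr_stack:
        task_id, _ = task_stack.pop()   # most urgent remaining task
        addr_id, _ = addr_stack.pop()   # idlest remaining address
        assignment[task_id] = addr_id
    return assignment

tasks = [("c1", 30.0), ("c2", 5.0), ("c3", 12.0)]       # max allowable delays (s)
addresses = [("es1", 0.2), ("es2", 0.9), ("es3", 0.5)]  # idle levels
print(allocate(tasks, addresses))
# c2 (most urgent) gets es2 (idlest), c3 gets es3, c1 gets es1
```

Pairing the two sorted stacks is what guarantees that the tightest deadlines meet the most spare capacity, which is the mechanism behind the task-completion-rate gains reported in the experiments.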

Algorithm 2 Resource allocation algorithm

Algorithm 2 demonstrates the stack-based, sorted resource allocation process. The input is the task offloading policy S_{c_i w_j} and the edge server set ES. The output is the task stack and the address stack. Lines 1-6 judge whether each task is offloaded to the edge server and obtain the set C_{edge} of tasks offloaded to the edge server. Lines 7-8 prioritize tasks according to their maximum allowable delay. Lines 9-10 sort the addresses according to the idle class of the edge servers. In Lines 11-14, when a new task is added, it is determined whether the total delay of the preceding tasks is greater than the task's maximum allowable delay; if it is, the task is assigned to another idle address. Line 15 returns the task stack and address stack for the next round of resource allocation.

Fig. 2 Mission information

Experiment and analysis
Dataset and experiment setup
The dataset used in the experiments is from the research-oriented general spatial crowdsourcing platform gMission [56]. As shown in Fig. 2, in the gMission dataset, each task contains its location, release time, and due date. The worker data includes their location and the time it takes to complete a task. Since the gMission dataset covers a long period and contains a large amount of data, this paper selects 48 hours of data records from it, containing 500 crowd workers, 1500 crowd tasks, and 29,654 worker check-in messages. The parameters used in the experiments involve crowd workers, crowd tasks, and edge servers. In this paper, the number of tasks accepted by a worker is set to at most three. The data processing speed of the mobile devices

carried by workers is 1.50 × 10^7 bit/s. The consumption of mobile devices processing data is 6.6 × 10^8 J/bit. In addition, this paper sets the number of edge servers within the task scope to five. The detailed parameter settings are shown in Table 2.

Table 2 Experimental parameter settings

Variable — Value
r_local — 1.50 × 10^7 bit/s
q_local — 6.60 × 10^8 J/bit
x_edge — 1.25 × 10^8 bit/s
r_edge — 8.38 × 10^8 bit/s
y_edge — 7.81 × 10^9 J/bit
q_edge — 8.19 × 10^9 J/bit

The experiments in this paper are all implemented in the Python environment, on a laptop with an Intel(R) Core(TM) i7-10750H CPU and 16 GB of memory.

Performance evaluation and comparative test
Figure 3 shows the convergence performance of the algorithm under different learning rates. From Fig. 3(a) and (b), it can be seen that when batch = 64, the algorithm has the best convergence effect. In Fig. 3(c), the abscissa is the training step size, and the ordinate is the total resource consumption. It can be seen from Fig. 3 that a learning rate that is too high or too low fails to achieve a good convergence effect; the convergence at a learning rate of 0.01 is better than at 0.001. In Fig. 3(d), the abscissa is the training step size, and the ordinate is the gain rate. It can

Fig. 3  Performance of the algorithm under different hyperparameters



Fig. 4  Effect of optimizer and number of DNNs on performance

be seen from the figure that the best effect occurs when the learning rate is 0.01. Therefore, the batch is set to 64 and the learning rate to 0.01 in the experiments.

The effect of the number of DNNs on the convergence of the algorithm can be seen in Fig. 4(a). When there is only one DNN, mutual learning between DNNs cannot take place, so the algorithm cannot converge. When the number of DNNs is greater than 1, the gain rate increases as the number of DNNs increases, and convergence requires fewer steps. Therefore, multiple DNNs can be selected for model training when the hardware permits. It can be seen in Fig. 4(b) that the convergence performance of the Adam optimizer is the best.

In order to verify the superiority of the proposed algorithm in terms of task completion rate, this paper conducts a comparison experiment with the RC [57] and LRU [58] algorithms and adds the traditional arrival time task ranking (ATR) algorithm for comparison. As shown in Fig. 5, the task completion rates of the four algorithms all decrease as the number of tasks increases, because newly added tasks may not be allocated computing resources in time, resulting in task failure. It can be seen from Fig. 5 that the DTOO

Fig. 5  Task completion rate under different number of tasks



Fig. 6  Total resource consumption under different number of tasks

It can be seen from Fig. 5 that the DTOO algorithm proposed in this paper has the highest task completion rate: in DTOO, tasks and edge server addresses are arranged in the stack according to their priorities, so tasks with higher priorities can obtain more computing resources, thus improving the task completion rate. The proposed algorithm is therefore superior in terms of task completion rate.
To verify the superiority of the proposed algorithm in terms of resource consumption, this paper conducts comparison experiments with Deep Q-Network [45], MUMTO [59], the Greedy algorithm, and two baselines in which all tasks are processed locally or all tasks are offloaded to the edge server. As shown in Fig. 6, the total resource consumption of all algorithms increases with the number of tasks. If all tasks are offloaded to edge servers for processing, high resource consumption occurs. The DTOO algorithm proposed in this paper is similar in utility to the Greedy algorithm, but since the Greedy algorithm enumerates all offloading strategies, it occupies a considerable amount of system memory. Therefore, the proposed algorithm is superior in terms of resource consumption.

Conclusion
This paper focuses on the task offloading problem of MCS in an edge-cloud environment. Aiming at the problem of task offloading strategy selection, this paper proposed a DTOO algorithm based on DNN, which obtains an approximately optimal offloading strategy through learning among multiple neural units and aims to solve the conflict between resource consumption and quality of service in practical MCS applications. To improve the task completion rate, this paper proposed a stack-based resource sorting method: tasks are arranged in the task stack according to their priority, and server addresses are arranged in the address stack according to their idle level, so that after offloading, computing resources are allocated to tasks according to their priority, thereby improving the task completion rate. Finally, performance tests and comparison experiments are carried out on the general spatial crowdsourcing platform gMission dataset, verifying that the proposed algorithm performs well in balancing resource consumption and service quality as well as task completion rate. In future work, the influence of workers' preferences on task offloading decisions will be considered, and the task offloading strategy will be further optimized.

Abbreviations
MCS  Mobile CrowdSourcing
DNN  Deep Neural Network
DTOO  Deep Neural Network-based Task Offloading Optimization
IoT  Internet of Things
EH  Energy Harvesting
MINLP  Mixed Integer Nonlinear Programming
GCGH  Gini Coefficient-based Greedy Heuristic
ATR  Arrival Time task Ranking
RC  tRend Caching
LRU  Least Recently Used
MUMTO  Multi-User Multi-Task Optimization
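To make the stack-based resource sorting concrete, here is a minimal sketch of matching the highest-priority task to the most idle server. The task names, priorities, and idle levels are invented for illustration, and Python heaps stand in for the paper's task stack and address stack.

```python
import heapq

# Hypothetical tasks (priority, name) and servers (idle level, address).
tasks = [(3, "t1"), (9, "t2"), (5, "t3"), (1, "t4")]
servers = [(40, "edge-A"), (70, "edge-B"), (10, "edge-C")]

# Order the "stacks" so that the highest-priority task and the most
# idle server pop first (heapq is a min-heap, hence the negation).
task_stack = [(-p, name) for p, name in tasks]
addr_stack = [(-idle, addr) for idle, addr in servers]
heapq.heapify(task_stack)
heapq.heapify(addr_stack)

assignment = {}
while task_stack and addr_stack:
    _, task = heapq.heappop(task_stack)   # highest priority first
    _, addr = heapq.heappop(addr_stack)   # most idle server first
    assignment[task] = addr

print(assignment)  # t4 never gets a server once capacity runs out
```

Here t4, the lowest-priority task, is the one left unassigned when server capacity runs out, which is exactly the failure mode that priority-ordered allocation is meant to bias against.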

Received: 22 October 2022  Accepted: 25 April 2023

Authors' contributions
M. L. wrote the main manuscript text. W. Y. modified and reviewed the paper. W. H., T. X., S. Z. and C. Z. participated in sorting out references. All authors reviewed the manuscript. The author(s) read and approved the final manuscript.
Authors' information
Lingkang Meng received his Bachelor degree in the School of Computer and Control Engineering from Yantai University. He is a graduate student in the School of Computer and Control Engineering at Yantai University. His research interests are mobile crowdsourcing and service computing.
Yingjie Wang received the Ph.D. degree in the College of Computer Science and Technology from Harbin Engineering University. She visited Georgia State University from 2013/09 to 2014/09 as a visiting scholar. Dr. Wang is currently a Professor in the School of Computer and Control Engineering at Yantai University. She is a Postdoc at South China University of Technology. Her research interests are mobile crowdsourcing, privacy protection and service computing. She has published more than 60 papers in well-known journals and conferences in her research field, including 2 ESI highly cited papers. In addition, she has presided over 2 National Natural Science Foundation of China projects and 2 China Postdoctoral Science Foundation projects. Dr. Wang obtained the Shandong Province Artificial Intelligence Outstanding Youth Award.
Haipeng Wang received the Ph.D. degree from Naval Aviation University in 2012, where he is currently a Professor. His research interests include the general area of intelligent perception and fusion, and big data technology and application. He also serves as a Reviewer for several distinguished journals, including IET RSN and IEEE AES.
Xiangrong Tong received the Ph.D. degree in the School of Computer and Information Technology from Beijing Jiaotong University. Currently, he is a Full Professor at Yantai University. His research interests are computer science, intelligent information processing and social networks. He has published more than 50 papers in well-known journals and conferences. In addition, he has presided over or joined 3 national projects and 3 provincial projects.
Zice Sun received the Bachelor degree in the School of Computer and Control Engineering, Yantai University. He is currently pursuing the Master degree in the School of Computer and Control Engineering, Yantai University. His research interests are mobile crowdsourcing and blockchain.
Zhipeng Cai received his PhD and M.S. degrees in the Department of Computing Science at the University of Alberta, and his B.S. degree from Beijing Institute of Technology. Dr. Cai is currently a Professor in the Department of Computer Science at Georgia State University. Dr. Cai's research areas focus on Networking, Privacy and Big Data. Dr. Cai is the recipient of an NSF CAREER Award. He is now a Steering Committee Co-Chair for WASA, and an editor/guest editor for Algorithmica, Theoretical Computer Science, Journal of Combinatorial Optimization, and IEEE/ACM Transactions on Computational Biology and Bioinformatics. He is a senior member of the IEEE.

Funding
This work was supported in part by the National Natural Science Foundation of China under Grant 62272405, the Youth Innovation Science and Technology Support Program of Shandong Province under Grant 2021KJ080, the Natural Science Foundation of Shandong Province under Grant ZR2022MF238, the Yantai Science and Technology Innovation Development Plan Project under Grant 2021YT06000645, and the Open Foundation of the State Key Laboratory of Networking and Switching Technology (Beijing University of Posts and Telecommunications) under Grant SKLNST-2022-1-12.

Availability of data and materials
The gMission dataset: http://gmission.github.io/.

Declarations

Ethics approval and consent to participate
Not applicable.

Consent for publication
Not applicable.

Competing interests
The authors declare no competing interests.

References
1. Wu Y, Zeng JR, Peng H, Chen H, Li C (2016) Survey on incentive mechanisms for crowd sensing. J Softw 27(8):2025–2047
2. Cai Z, He Z (2019) Trading private range counting over big IoT data. In: 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS). IEEE, Dallas, p 144–153
3. Zheng X, Cai Z (2020) Privacy-preserved data sharing towards multiple parties in industrial IoTs. IEEE J Sel Areas Commun 38:968–979
4. Xiang C, Zhou Y, Dai H, Qu Y, He S, Chen C, Yang P (2021) Reusing delivery drones for urban crowdsensing. IEEE Trans Mob Comput. https://doi.org/10.1109/TMC.2021.3127212
5. Lu Z, Wang Y, Li Y, Tong X, Mu C, Yu C (2021) Data-driven many-objective crowd worker selection for mobile crowdsourcing in industrial IoT. IEEE Trans Ind Inform. https://doi.org/10.1109/TII.2021.3076811
6. Sandhu AK (2021) Big data with cloud computing: Discussions and challenges. Big Data Min Anal 5:32–40
7. Duan Z, Li W, Zheng X, Cai Z (2019) Mutual-preference driven truthful auction mechanism in mobile crowdsensing. In: 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS). IEEE, Dallas, p 1233–1242
8. Hasenfratz D, Saukh O, Sturzenegger S, Thiele L et al (2012) Participatory air pollution monitoring using smartphones. Mob Sens 1:1–5
9. Brković M, Sretović V (2013) Smart solutions for urban development: potential for application in Serbia. In: Congress Proceedings. Regional Development, Spatial Planning and Strategic Governance (RESPAG) 2nd International Scientific Conference, Belgrade. IAUS, Belgrade
10. Libelium (2017). http://www.libelium.com/. Accessed 2022
11. Wang Y, Cai Z, Tong X, Gao Y, Yin G (2018) Truthful incentive mechanism with location privacy-preserving for mobile crowdsourcing systems. Comput Netw 135:32–43
12. Qi L, Liu Y, Zhang Y, Xu X, Bilal M, Song H (2022) Privacy-aware point-of-interest category recommendation in internet of things. IEEE Internet Things J. https://doi.org/10.1109/JIOT.2022.3181136
13. Li F, Wang Y, Gao Y, Tong X, Jiang N, Cai Z (2021) Three-party evolutionary game model of stakeholders in mobile crowdsourcing. IEEE Trans Comput Soc Syst. https://doi.org/10.1109/TCSS.2021.3135427
14. Chi C, Wang Y, Tong X, Siddula M, Cai Z (2021) Game theory in internet of things: A survey. IEEE Internet Things J. https://doi.org/10.1109/JIOT.2021.3133669
15. Chen Y, Zhao J, Wu Y et al (2022) Qoe-aware decentralized task offloading and resource allocation for end-edge-cloud systems: A game-theoretical approach. IEEE Trans Mob Comput. https://doi.org/10.1109/TMC.2022.3223119
16. Xiang C, Yang P, Wu X, He H, Wang B, Liu Y (2015) istep: A step-aware sampling approach for diffusion profiling in mobile sensor networks. IEEE Trans Veh Technol 65:8616–8628
17. Chen Y, Gu W, Li K (2022) Dynamic task offloading for internet of things in mobile edge computing via deep reinforcement learning. Int J Commun Syst 5154
18. Kong L, Wang L, Gong W, Yan C, Duan Y, Qi L (2021) Lsh-aware multitype health data prediction with privacy preservation in edge environment. World Wide Web 1–16
19. Qi L, Lin W, Zhang X, Dou W, Xu X, Chen J (2022) A correlation graph based approach for personalized and compatible web apis recommendation in mobile app development. IEEE Trans Knowl Data Eng. https://doi.org/10.1109/TKDE.2022.3168611
20. Qi L, Yang Y, Zhou X, Rafique W, Ma J (2021) Fast anomaly identification based on multi-aspect data streams for intelligent intrusion detection toward secure industry 4.0. IEEE Trans Ind Inform. https://doi.org/10.1109/TII.2021.3139363
21. Chen Y, Xing H, Ma Z, Chen X, Huang J (2022) Cost-efficient edge caching for noma-enabled IoT services. China Commun
22. Li K, Zhao J, Hu J et al (2022) Dynamic energy efficient task offloading and resource allocation for noma-enabled IoT in smart buildings and environment. Build Environ. https://doi.org/10.1016/j.buildenv.2022.109513

23. Huang J, Gao H, Wan S et al (2023) Aoi-aware energy control and computation offloading for industrial IoT. Futur Gener Comput Syst 139:29–37
24. Chen X, Jiao L, Li W, Fu X (2015) Efficient multi-user computation offloading for mobile-edge cloud computing. IEEE/ACM Trans Networking 24:2795–2808
25. Huang L, Feng X, Zhang C, Qian L, Wu Y (2019) Deep reinforcement learning-based joint task offloading and bandwidth allocation for multi-user mobile edge computing. Digit Commun Netw 5:10–17
26. Bi R, Liu Q, Ren J, Tan G (2020) Utility aware offloading for mobile-edge computing. Tsinghua Sci Technol 26:239–250
27. Zhang Q, Wang Y, Yin G, Tong X, Sai AMVV, Cai Z (2022) Two-stage bilateral online priority assignment in spatio-temporal crowdsourcing. IEEE Trans Serv Comput 8:516–530
28. Wang Y, Cai Z, Zhan ZH, Zhao B, Tong X, Qi L (2020) Walrasian equilibrium-based multiobjective optimization for task allocation in mobile crowdsourcing. IEEE Trans Comput Soc Syst 7(4):1033–1046
29. Chen Y, Hu J, Zhao J, Min G (2023) Qos-aware computation offloading in leo satellite edge computing for IoT: A game-theoretical approach. Chin J Electron. https://doi.org/10.1109/TMC.2022.3223119
30. Shi W, Cao J, Zhang Q, Li Y, Xu L (2016) Edge computing: Vision and challenges. IEEE Internet Things J 3:637–646
31. Sun Z, Wang Y, Cai Z, Liu T, Tong X, Jiang N (2021) A two-stage privacy protection mechanism based on blockchain in mobile crowdsourcing. Int J Intell Syst 36(5). https://doi.org/10.1002/int.22371
32. Liu T, Wang Y, Li Y, Tong X (2020) Privacy protection based on stream cipher for spatio-temporal data in IoT. IEEE Internet Things J 7(9):7928–7940
33. Cai Z, Zheng X (2018) A private and efficient mechanism for data uploading in smart cyber-physical systems. IEEE Trans Netw Sci Eng 7(2):766–775
34. Wang T, Lu Y, Cao Z, Shu L, Zheng X, Liu A, Xie M (2019) When sensor-cloud meets mobile edge computing. Sensors 19(23):5324
35. Zhao W, Liu J, Guo H, Hara T (2018) Etc-IoT: Edge-node-assisted transmitting for the cloud-centric internet of things. IEEE Netw 32(3):101–107
36. Cai Z, Xiong Z, Xu H, Wang P, Li W, Pan Y (2021) Generative adversarial networks: A survey toward private and secure applications. ACM Comput Surv (CSUR) 54(6):1–38
37. Ren J, Yu G, He Y, Li GY (2019) Collaborative cloud and edge computing for latency minimization. IEEE Trans Veh Technol 68(5):5031–5044
38. Wang W, Wang Y, Duan P, Liu T, Tong X, Cai Z (2022) A triple real-time trajectory privacy protection mechanism based on edge computing and blockchain in mobile crowdsourcing. IEEE Trans Mob Comput 1–18
39. Xiang C, Zhang Z, Qu Y, Lu D, Fan X, Yang P, Wu F (2020) Edge computing-empowered large-scale traffic data recovery leveraging low-rank theory. IEEE Trans Netw Sci Eng 7(4):2205–2218
40. Xiang C, Li Y, Zhou Y, He S, Qu Y, Li Z, Gong L, Chen C (2022) A comparative approach to resurrecting the market of mod vehicular crowdsensing. In: Proc. IEEE Conf. Comput. Commun. IEEE, London, p 1–10
41. Xiang C, Yang P, Tian C, Zhang L, Lin H, Xiao F, Zhang M, Liu Y (2015) Carm: Crowd-sensing accurate outdoor rss maps with error-prone smartphone measurements. IEEE Trans Mob Comput 15(11):2669–2681
42. Wang Y, Gao Y, Li Y, Tong X (2020) A worker-selection incentive mechanism for optimizing platform-centric mobile crowdsourcing systems. Comput Netw 171(107):144
43. Dinh T, Tang J, La Q, Quek T (2017) Offloading in mobile edge computing: Task allocation and computational frequency scaling. IEEE Trans Commun 65(8):3571–3584
44. Wu H, Sun Y, Wolter K (2018) Energy-efficient decision making for mobile cloud offloading. IEEE Trans Cloud Comput 8(2):570–584
45. Xu J, Chen L, Zhou P (2018) Joint service caching and task offloading for mobile edge computing in dense networks. In: IEEE INFOCOM 2018 - IEEE Conference on Computer Communications. IEEE, Honolulu, p 207–215
46. Shu C, Zhao Z, Han Y, Min G, Duan H (2019) Multi-user offloading for edge computing networks: A dependency-aware and latency-optimal approach. IEEE Internet Things J 7(3):1678–1689
47. Mao Y, Zhang J, Letaief K (2016) Dynamic computation offloading for mobile-edge computing with energy harvesting devices. IEEE J Sel Areas Commun 34(12):3590–3605
48. Zhao P, Tian H, Qin C, Nie G (2017) Energy-saving offloading by jointly allocating radio and computational resources for mobile edge computing. IEEE Access 5:11255–11268
49. Chen Y, Gu W, Xu J et al (2022) Dynamic task offloading for digital twin-empowered mobile edge computing via deep reinforcement learning. China Commun. https://doi.org/10.1002/dac.5154
50. Mnih V, Kavukcuoglu K, Silver D, Rusu A, Veness J, Bellemare M, Graves A, Riedmiller M, Fidjeland A, Ostrovski G (2015) Human-level control through deep reinforcement learning. Nature 518(7540):529–533
51. Huang J, Wan J, Lv B, Ye Q et al (2023) Joint computation offloading and resource allocation for edge-cloud collaboration in internet of vehicles via deep reinforcement learning. IEEE Syst J. https://doi.org/10.1109/JSYST.2023.3249217
52. Xu Z, Wang Y, Tang J, Wang J, Gursoy MC (2017) A deep reinforcement learning based framework for power-efficient resource allocation in cloud rans. In: 2017 IEEE International Conference on Communications (ICC). IEEE, Paris, p 1–6
53. Ye H, Li G, Juang B (2017) Power of deep learning for channel estimation and signal detection in ofdm systems. IEEE Wirel Commun Lett 7(1):114–117
54. He Y, Zhang Z, Yu F, Zhao N, Yin H, Leung V, Zhang Y (2017) Deep-reinforcement-learning-based optimization for cache-enabled opportunistic interference alignment wireless networks. IEEE Trans Veh Technol 66(11):10433–10445
55. Huang L, Feng X, Qian L, Wu Y (2018) Deep reinforcement learning-based task offloading and resource allocation for mobile edge computing. In: International Conference on Machine Learning and Intelligent Communications. MLICOM, Hangzhou, p 33–42
56. gMission dataset. http://gmission.github.io/. Accessed 2022
57. Li S, Xu J, van der Schaar M, Li W (2016) Trend-aware video caching through online learning. IEEE Trans Multimed 18(12):2503–2516
58. Jin W, Li X, Yu Y, Wang Y (2013) Adaptive insertion and promotion policies based on least recently used replacement. IEICE Trans Inf Syst 96(1):124–128
59. Chen MH, Liang B, Dong M (2016) Joint offloading decision and resource allocation for multi-user multi-task mobile cloud. In: 2016 IEEE International Conference on Communications (ICC). IEEE, Paris, p 1–6

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
