Dynamic Data Possession in Cloud Computing System Using Fragmentation and Replication


Imperial Journal of Interdisciplinary Research (IJIR)
Vol-2, Issue-2, 2016
ISSN: 2454-1362, http://www.onlinejournal.in

Dynamic Data Possession in Cloud Computing System Using Fragmentation and Replication

Kalyankar Amrut Gopalrao1, Khodade Nittin Balasaheb2 & Najan Vaibhav Kailas3
1,2&3 Graduate Student, Department of Information Technology Engineering, Genba Sopanrao Moze College of Engineering, Pune

Abstract — As databases grow, data security and storage have become major issues in database technology. Cloud computing has emerged to address them, but data outsourced to the cloud raises new security and confidentiality problems. To maximize security, this paper presents a data replication and fragmentation technique that gives the cloud user a strong guarantee of data security with respect to the CSP. The technique has the following properties: 1) it provides maximum security for the data copies stored on the cloud; 2) it supports block-level operations on the cloud; 3) all data is kept in encrypted form; 4) at the cloud level, data is replicated and the replicas are then fragmented; 5) the T-coloring algorithm is used to place the fragmented copies. Even if an unauthorized user (a hacker) succeeds in stealing data, no useful information is revealed, because each fragment node contains only a limited amount of data in encrypted form.

Keywords — Cloud service provider (CSP), cloud security, fragmentation, replication, T-coloring.

1. Introduction

Outsourcing data to a cloud computing system is a new trend in the IT field that addresses the issues of data security and data maintenance while reducing the cost of data storage equipment. Cloud data is also remotely accessible to every authorized user from any geographic area.

Once the data owner places data on a cloud system, he or she loses control over it; this is where new data security and data confidentiality problems confront both the data owner and the cloud service provider. The cloud service provider should provide maximum data security. Because this burden on the cloud provider keeps increasing, we propose a system called Dynamic Data Possession in Cloud Computing System Using Fragmentation and Replication. Its main focus is cloud data security and assuring cloud users that their data is secure on the cloud system.

The system is built on three main concepts: 1) data replication, 2) data fragmentation, and 3) T-coloring. These are the building blocks of the security scheme.

Data replication is used to replicate the data owner's file so that identical copies can be stored on different databases of the same cloud server.

Data fragmentation then works on the replicated files: each file is subdivided into a number of small parts, and each part, called a node, contains only a small portion of the information in the data file.

The T-coloring algorithm is used to place the data fragments (nodes) on databases at different locations.
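As a rough illustration of the third building block, the following sketch (our own simplified reading, not the paper's algorithm) treats server indices as "colors" and places fragments so that the difference between any two chosen server indices never falls in a forbidden set T, in the spirit of T-coloring:

```python
# Illustrative sketch: place file fragments on servers so that the
# index distance between any two chosen servers avoids a forbidden
# set T. The greedy strategy and the default T = {0, 1} are our own
# simplifying assumptions, not the paper's implementation.

def t_coloring_placement(num_servers, num_fragments, forbidden=frozenset({0, 1})):
    """Greedily pick one server per fragment such that the absolute
    difference between any two chosen server indices is not in
    `forbidden`. Returns a fragment -> server list, or None if no
    valid placement exists with the given number of servers."""
    placement = []
    for _ in range(num_fragments):
        for server in range(num_servers):
            if all(abs(server - used) not in forbidden for used in placement):
                placement.append(server)
                break
        else:
            return None  # too few servers to keep all fragments separated
    return placement

print(t_coloring_placement(10, 3))  # -> [0, 2, 4]
```

With T = {0, 1}, no two fragments may share a server or sit on adjacent servers, which is why the example skips every other index.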


2. Literature Survey

A. Replication for Improving Availability & Balancing Load in Cloud Data Centers [1]:

A large number of replication strategies for managing replicas have been proposed in the literature. Through replication, data replicas are stored on different data nodes for high reliability and availability. The replication factor for each data block and the replica placement sites must be decided first. One replication strategy uses a region-based framework driven by the demand for files over a geographically distributed Grid environment. The access frequency of each file is calculated, and on that basis the strategy determines in which regions replicas should be placed and how many replicas are needed. When a file is created, the access frequency is calculated for each region and replicas are placed in the regions in decreasing order of access frequency. The number of requests and the response time are the main criteria for deciding at which site within a region the file should be placed. This strategy therefore increases data availability and also reduces the number of unnecessary replications [1].

B. Quantitative comparisons of the state-of-the-art data center architectures [2]:

Data outsourced to a public cloud must be secured. Unauthorized data access by other users and processes, whether accidental or deliberate, must be prevented; any weak entity can put the whole cloud at risk. In such a scenario, the security mechanism must substantially increase an attacker's effort to retrieve a reasonable amount of data even after a successful intrusion into the cloud. Moreover, the probable amount of loss resulting from data leakage must also be minimized [2].

C. Energy-efficient data replication in cloud computing datacenters [3]:

A central database (Central DB), located in the wide-area network, hosts all the data required by the cloud applications. To speed up access and reduce latency, each data center hosts a local database, called the datacenter database (Datacenter DB), which replicates the most frequently used data items from the central database. Each rack hosts at least one server capable of running a local rack-level database (Rack DB), which replicates data from the datacenter database [3].

D. Encryption and fragmentation for data confidentiality in the cloud [4]:

Fragmentation consists in splitting the attributes of a relation R into different vertical views (fragments) in such a way that the views stored at external providers do not violate confidentiality requirements, either directly or indirectly. Intuitively, fragmentation protects the sensitive association represented by an association constraint c when the attributes in c do not all appear in the same (publicly available) fragment and fragments cannot be joined by unauthorized users. Note that singleton constraints are correctly enforced only when the corresponding attributes do not appear in any fragment stored at a cloud provider [4].

E. Division and Replication of Data in Cloud for Optimal Performance and Security [5]:

Node separation is ensured by means of T-coloring. To improve data retrieval time, nodes are selected based on centrality measures that ensure improved access time. To further improve retrieval time, fragments are judiciously replicated over the nodes that generate the highest read/write requests [5].

3. Proposed System

The key point of this system is that files are encrypted before they are uploaded to the cloud. Operations are carried out at three main levels: 1) the user level, 2) the admin (cloud) level, and 3) the database level.

A) At User Level:

Initially, the user selects the file to upload to the cloud. The system encrypts the selected file and generates a private and a public key at the same time. The encrypted data file is then uploaded to the cloud server together with the identity of the user. The main task of the cloud service provider (CSP) is to maintain all information related to the user and the database.

B) At Cloud Server Level:
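As a rough sketch of the user-level encryption and the cloud-level replication and fragmentation described here, the following stdlib-only code uses stand-ins of our own choosing: the XOR keystream stands in for a real cipher such as AES, and the function names are illustrative, not the authors' implementation.

```python
import hashlib
import os

def encrypt(data: bytes, key: bytes) -> bytes:
    """User level: XOR data with a SHA-256-derived keystream.
    Demo stand-in only; a real system would use a vetted cipher."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ s for b, s in zip(data, stream))

def replicate(ciphertext: bytes, copies: int) -> list[bytes]:
    """Cloud level: generate the identical replicas demanded by the user."""
    return [ciphertext for _ in range(copies)]

def fragment(replica: bytes, n: int) -> list[bytes]:
    """Cloud level: split a replica into n near-equal fragments; each
    fragment alone reveals only a limited slice of encrypted data."""
    size = -(-len(replica) // n)  # ceiling division
    return [replica[i * size:(i + 1) * size] for i in range(n)]

key = os.urandom(32)
ct = encrypt(b"data owner's file contents", key)
replicas = replicate(ct, 3)
frags = fragment(replicas[0], 4)
# Joining the fragments and decrypting recovers the original file.
assert encrypt(b"".join(frags), key) == b"data owner's file contents"
```

The XOR construction is symmetric, so applying `encrypt` twice with the same key restores the plaintext; fragmentation happens only after encryption, matching the paper's claim that a stolen fragment exposes only a small piece of ciphertext.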


1. Copy Generation

At the cloud level, the user and the data file are identified, and at the admin level a temporary file is saved for the subsequent operations. The cloud server then divides the uploaded file by fragmentation: the file is split into n fragments, and the fragmented file is stored sequentially at the cloud level. At this stage, all generated files are stored temporarily at the admin level.

Fig. 1. Creation of replicas

2. Fragmentation

The number of replicas of the file depends on the user's demand. To maximize security, all replicas are again fragmented into n pieces; each small node of a replica contains a small amount of information together with a sequence ID. All of these processes are carried out at the cloud (admin) level, and each fragment of each replica still resides there. The next stage of the process takes place at the database level.

Fig. 2. Fragmentation of replicas

3. Database Level

Finally, with the help of the T-coloring algorithm, the cloud service provider stores the file in the database. In the cloud database, every fragment of the file is placed on the same server in such a way that each server holds the file only in fragmented form. Using the T-coloring method, each fragment is placed at a different location on the same server, so that even after a successful attack on a server the attacker will not obtain any important information. The role of this module is to allocate database memory for the fragments; this allocation serves both security and performance.

In this system we guarantee to the user that the data is stored on the cloud server, and we assure the user that the data remains confidential and secure. For this we use the following algorithms:

Algorithms:

1: KeyGen (pk, sk)
After the data file is encrypted, this algorithm generates the keys: the private key (pk) and the secret key (sk).

2: CopyGen (CNi, F)
This algorithm generates the identical copies (replicas), in the number demanded by the user.

3: TagGen (sk, F)
This algorithm provides a tag for each fragment of the data file.

4: PrepareUpdate
PrepareUpdate works on the previous data stored on the owner's side and on the update information related to it.

5: ExecuteUpdate ()
This is run by the CSP; the CSP supplies the file copies and the tag set, and the output is the updated tag set.

6: Prove ()
This is used by the CSP to assure the cloud user that exactly the replicas mentioned in the service level agreement have been generated.

7: Verify ()


Verify is a user-level algorithm used by the authorized user of the data. Through it, the cloud provides assurance to the user that their information is secured at the cloud level.

8: T-coloring ()
As mentioned earlier, T-coloring is used for database storage management and for arranging the data fragments in cloud storage.

4. Design

The system architecture consists of the cloud client, the cloud server, and the database, as shown in the figure.

Step 1: The user selects one file, which is converted into encrypted format. Finally, the encrypted file is uploaded to the cloud server.

Step 2: The uploaded file is converted into a number of fragments, which are then placed at different locations on different servers.

Step 3: With the help of the T-coloring algorithm, the fragmented file is stored in the database.

Fig. 3. Architecture of Dynamic Data Possession in Cloud Computing System Using Fragmentation and Replication

6. Methodology

The methodology of this system consists of five main modules: Cryptography, Copy Generator, Fragmentation, Replication, and Allocation, all discussed above. The working flow goes from the cloud user down to the database, and the return flow goes from the database back to the user; all five modules are used along the way.

7. Conclusion

We conclude that the user should have a guarantee that the security the CSP provides for the data is maximal. On this basis, the new cloud security and data confidentiality system will be provided at the cloud admin level. It will be more secure and easier to use than present cloud security systems.

8. Acknowledgement

Our thanks to our college, G.S.M.C.O.E., Savitribai Phule Pune University, and our Department of Information Technology Engineering, which provided the support and equipment we needed to complete our work. We extend our heartfelt gratitude to our guide, Prof. Ashwini Jadhav, and our coordinator, Prof. Priyanka More, who supported us throughout our research with their patience and knowledge.

9. References

[1] K. Bilal, S. U. Khan, L. Zhang, H. Li, K. Hayat, S. A. Madani, N. Min-Allah, L. Wang, D. Chen, M. Iqbal, C. Z. Xu, and A. Y. Zomaya, "Quantitative comparisons of the state of the art data center architectures," Concurrency and Computation: Practice and Experience, Vol. 25, No. 12, 2013, pp. 1771-1783.

[2] K. Bilal, M. Manzano, S. U. Khan, E. Calle, K. Li, and A. Zomaya, "On the characterization of the structural robustness of data center networks," IEEE Transactions on Cloud Computing, Vol. 1, No. 1, 2013, pp. 64-77.

[3] D. Boru, D. Kliazovich, F. Granelli, P. Bouvry, and A. Y. Zomaya, "Energy-efficient data replication in cloud computing datacenters," in IEEE Globecom Workshops, 2013, pp. 446-451.

[4] Y. Deswarte, L. Blain, and J-C. Fabre, "Intrusion tolerance in distributed computing systems," in Proceedings of the IEEE Computer Society Symposium on Research in Security and Privacy, Oakland, CA, 1991, pp. 110-121.


[5] B. Grobauer, T. Walloschek, and E. Stocker, "Understanding cloud computing vulnerabilities," IEEE Security and Privacy, Vol. 9, No. 2, 2011, pp. 50-57.

