
CHAPTER VI

DATA STORAGE PROTOCOL FOR CLOUD SECURITY

6.1 INTRODUCTION
Data privacy and verification in the cloud have been handled extensively in many
existing works. A survey of the field of public auditability shows that the possibility
of the third party auditor itself being a vulnerable component is not addressed
anywhere. Previous works do not address all security threats, and all of them focus on
a single-server scenario. Most do not consider dynamic data operations; the problem
of supporting both public auditability and dynamism has been addressed only recently,
and even there the data remains vulnerable in the hands of the third party auditor.
The DSP protocol is proposed to secure data in the cloud environment by remotely
verifying the data and its possession on the server.

Figure 24: Third party data auditing setup in cloud

6.2 Proposed Work


The Data Storage Protocol (DSP) is developed to provide stronger security for data
stored in the cloud environment by remotely verifying the data and its possession on
the server. The DSP protocol allows users to obtain a probabilistic proof from the
storage service providers; such a proof is used as evidence that their data has been
stored there. One advantage of this protocol is that the storage service provider can
generate the proof by accessing only a small portion of the whole dataset. The data
owner executes the protocol to verify that a dataset is stored on a server machine as
a collection of n blocks. Before uploading the data to the remote storage, the data
owner pre-processes the dataset and generates a piece of metadata. The metadata is
stored at the data owner's side, and the dataset is transmitted to the storage server.
The cloud storage service stores the dataset and sends data back in response to future
queries from the data owner. The data owner (client) may conduct operations on the
data, such as expanding it or generating additional metadata to be stored at the cloud
server side. The data owner can execute the Data Storage Protocol (DSP) before
deleting the local copy, to ensure that the uploaded copy has been stored successfully
at the server machines.
Two methods are used for DSP:
1. Metadata Generation
2. Metadata Verification
This work solves the issue of restricting the third party auditor from openly
accessing the data: DSP is designed for precisely this purpose, giving the auditor
access only to the owner's metadata for the data being verified.

6.2.1 Purpose of DSP Protocol


(a) Remote Data Possession at Untrusted Stores
This work states that cloud storage, which aims to make all storage resources
available in a plug-and-play way, has become a focus of attention. When users store
their data in cloud storage, their main concern is whether the data remains intact.
The goal of the DSP protocol is to check data possession remotely. Here an efficient
Remote Data Possession Check (RDPC) scheme is proposed, which has the following
advantages:
1. It is efficient in terms of computation and communication.
2. It allows verification without requiring the challenger to compare against
the original data.
3. It uses only small challenges and responses; users need to store only two
secret keys and several random numbers.
Finally, a challenge updating method based on Euler's theorem is proposed, as
sketched below.
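
The role of Euler's theorem in such a challenge update can be illustrated as follows;
the RSA-style modulus N and base g are assumptions made for this sketch, not
parameters taken from the scheme itself. For a modulus N = pq with
φ(N) = (p − 1)(q − 1), Euler's theorem states that g^φ(N) ≡ 1 (mod N) for any g with
gcd(g, N) = 1, so exponents can be reduced modulo φ(N):

g^r ≡ g^(r mod φ(N)) (mod N)

Hence the data owner, who knows φ(N), can derive a fresh challenge
g^(st mod φ(N)) mod N from an old challenge g^s mod N and a new random exponent t,
without storing or recomputing earlier challenges.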
(b) Public Verifiability for Storage Security
This work states that, by outsourcing data, users are relieved of the burden of local
data storage. However, outsourcing also removes their physical control over storage
dependability and security, which both enterprises and individuals have traditionally
expected. This unique paradigm brings many new security challenges, which need to be
clearly understood and resolved. This work studies the problem of ensuring the
integrity of data storage in cloud computing. To ensure the correctness of the data,
it considers the task of allowing a third party auditor, on behalf of the cloud
consumer, to verify the integrity of the data stored in the cloud. The DSP protocol
keeps the storage required at the client side minimal, which is beneficial for clients.
(c) Public Auditability for Storage Security
This addresses the problem of ensuring the integrity of data storage in cloud
computing. It considers the task of allowing a third party auditor to verify the
integrity of dynamic data stored in the cloud, achieving both public auditability and
dynamic data operations. It first identifies the difficulties and potential security
problems of directly extending prior works to fully dynamic data updates, and then
shows how to construct an elegant verification scheme that seamlessly integrates these
two salient features into the protocol design.
(d) Remote Data Checking Using Provable Data Possession
This introduces a model for provable data possession (PDP) that can be used for remote
data checking. The model generates probabilistic proofs of possession by sampling
random sets of blocks from the server, which drastically reduces I/O costs. The
challenge/response protocol transmits a small, constant amount of data, minimizing
network communication. The model is also robust, incorporating mechanisms for
mitigating arbitrary amounts of data corruption. It presents two provably secure PDP
schemes that are more efficient than previous solutions; in particular, the overhead
at the server is low (or even constant), as opposed to linear in the size of the data.
It also proposes a generic transformation that adds robustness to any remote data
checking scheme based on spot checking, and conducts an in-depth experimental
evaluation of the trade-offs in performance, security, and space overhead when adding
robustness to such a scheme.
(e) Privacy Preserving Data Integrity Checking
A third-party auditor is allowed to periodically verify the data stored by a service
and to assist in returning the data intact to the customer. The protocols are
privacy-preserving, i.e., they never reveal the data contents to the auditor. This
solution removes the burden of verification from the customer, alleviates both the
customer's and the storage service's fear of data leakage, and provides a method for
independent arbitration of data retention contracts. It provides storage service
accountability through independent third-party auditing and arbitration. The protocols
have three important operations: initialization, audit, and extraction, with the focus
primarily on audit and extraction. For audits, the auditor interacts with the service
to check that the stored data is intact. For extraction, the auditor interacts with
the service and the customer to check that the data is intact and to return it to the
customer.
All of the auditing schemes mentioned above support dynamic data updates, which is a
critical need in environments like the cloud, where huge volumes of data are updated
frequently. However, these verification schemes do not consider the effect on data
privacy when the data is placed in the hands of a third party auditor.

6.3 Data Storage Protocol


The security model is designed as a data integrity scheme that supports the
integration of both public auditability and dynamic updates. The scheme verifies
metadata rather than the actual data. The model is divided into two fundamental
blocks:
(a) Metadata Generation and
(b) Metadata Verification

(a) Metadata Generation


The process is initiated with the generation of a public key parameter Pk by the cloud
client. The client then generates a signature for each individual file block. The
signature is a form of metadata; these combinations of the public key and file blocks
are called codes. Finally, the generated metadata is transmitted to cloud storage.

Figure 25: Metadata Generation
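
A minimal Python sketch of this step is given below; HMAC-SHA256 is used here as a
stand-in for the public-key signature computed over each block, and the block size and
key are illustrative assumptions.

import hmac
import hashlib

BLOCK_SIZE = 4096  # illustrative block size (assumption)

def metadata_gen(file_bytes, key):
    # Split the file into blocks and generate a code for each block.
    # HMAC-SHA256 stands in for the signature computed under Pk.
    blocks = [file_bytes[i:i + BLOCK_SIZE]
              for i in range(0, len(file_bytes), BLOCK_SIZE)]
    codes = [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]
    return blocks, codes  # both are transmitted to cloud storage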


(b) Metadata Verification
Once the metadata has been forwarded to the cloud, the Third Party Auditor (TPA) can
perform data verification at any time. When the TPA receives a request from the client
for data verification, it sends an audit message to the service provider asking for a
set of data blocks. The audit message contains the positions of the requested blocks.
The service provider forms a linear combination of those blocks and applies a mask,
then sends the authenticator and the masked blocks to the TPA. Finally, the TPA
compares the masked blocks from the service provider with the metadata from the
client.

Figure 26: Metadata Verification

DSP Protocol Algorithm


Begin
metadata_gen( )
Step 1: Split the file F into blocks.
Step 2: Generate a public key Pk.
Step 3: Generate an authentication code for each block using the key.
Step 4: Transmit the authentication codes along with the file blocks to the cloud.
End
Begin
metadata_verify( )
Step 5: Generate an audit message containing the positions of the requested file
blocks and send it to the CSP.
Step 6: The CSP forwards a response message containing the metadata of the requested
blocks to the TPA.
Step 7: The TPA compares the metadata from the CSP with the client's metadata.
End
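
A minimal, self-contained Python sketch of this challenge-response flow follows.
HMAC-SHA256 stands in for the scheme's public-key authentication codes, and the CSP
response is simplified to return the stored codes directly rather than a masked linear
combination; the block size, key, and challenged positions are likewise illustrative
assumptions.

import hmac
import hashlib

BLOCK_SIZE = 4096  # must match the block size used at generation time

def audit_message(positions):
    # TPA's audit message: the positions of the requested blocks (Step 5).
    return {"positions": positions}

def csp_response(blocks, codes, msg):
    # CSP returns the requested blocks with their stored codes (Step 6).
    # The full scheme would instead return a masked linear combination
    # of the blocks plus an aggregated authenticator.
    return [(i, blocks[i], codes[i]) for i in msg["positions"]]

def tpa_verify(response, key):
    # TPA recomputes each code and compares it with the CSP's copy (Step 7).
    # Here the client's metadata is recomputed from the key for simplicity;
    # in DSP the TPA compares against metadata supplied by the client.
    for i, block, code in response:
        expected = hmac.new(key, block, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, code):
            return False  # block i has been modified or lost
    return True

# Demo: generate blocks and codes as in metadata_gen(), then audit them.
data = b"example file contents " * 2000
key = b"client-secret-key"
blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
codes = [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]
print(tpa_verify(csp_response(blocks, codes, audit_message([0, 2, 4])), key))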

Data Storage Protocol Equations


Let Pk be the public key used for encrypting the file blocks, σ the code generated for
each block (the metadata), F the actual file to be verified, and Bi a single block of
the file.

Equation 1. The file is divided into n blocks:
F = {B1, B2, …, Bn}

Equation 2. A code is generated for each block:
σi = Sig_Pk(Bi), i = 1, …, n

Equation 3. The blocks and codes are transferred to the cloud:
{F, σ1, …, σn} → CSP

Equation 4. The TPA sends an audit message, containing the positions of the requested
blocks, to the CSP:
chal = {i1, i2, …, ic} → CSP

Equation 5. The CSP sends the masked blocks and the authenticator to the TPA:
μ = Σk νk·Bik + mask, {μ, σ} → TPA

Equation 6. The TPA compares the CSP's response with the client's metadata:
Verify(μ, σ, Pk, chal) ∈ {accept, reject}

Figure 27: Metadata Verification Operations

6.4 Implementation
(a) Operating Systems Metadata
The operating system maintains metadata to authenticate the user; for this, an account
is created, or a batch file is invoked by creating an object. This account enables the
user to connect to the Metadata Server. The Metadata Server can grant or deny the user
access to metadata objects based on authentication.
The following are the steps for creating a metadata object.
1. Select Start ► Settings ► Control Panel ► Administrative Tools ► Local
Security Policy.

2. In the Local Security Settings window, expand Local Policies in the left pane
and click User Rights Assignment.
(Sample for Metadata Creation)

Figure 28: User Rights Assignment

3. Click Log on as a batch job in the right pane to display the Log on as a
batch job Properties dialog box.

Figure 29: Batch job Properties


4. On the Local Security Setting tab, click Add User or Group to display the
Select Users or Groups dialog box. Enter your information in the fields and click
OK to return to the Log on as a batch job Properties dialog box.

Figure 30: Create User or Group of Metadata

5. Make sure your new group appears in the box on the Local Security
Setting tab and click OK.

Figure 31: New Group Appears


(b) Verify Metadata
1. Open Enterprise Guide.
2. Select Tools ► Options ► Administration ► Repository and Server. In the
Repository and Server dialog box, click Manage to display the Repository Manager.
3. In the Repository Manager, select the IT Config Metadata Repository listing. Then
click Modify to display the Modify Repository dialog box.

Sample for Verification

Figure 32: Verify Metadata

Figure 33: Metadata Repository


4. In the Modify Repository dialog box, verify that the repository is defined as
follows:
• Machine: The Remote radio button is checked and the Metadata Server's
machine name has been typed in.
• Port: 8561
• User ID: Use DEMO for testing or the valid user ID of a production client.
• Password: Use the password for the user ID (DEMO or a valid production
client user ID) that you use.
Then click Browse to connect to the Select Repository dialog box.

Figure 34: Metadata Server Details

Figure 35: Metadata Found


5. In the Select Repository dialog box, select Foundation and click OK to return to
the Modify Repository dialog box.
6. In the Modify Repository dialog box, click Save, which takes you back to the
Repository Manager.
7. In the Repository Manager, click Set Active. Then close both the Repository
Manager and the Options dialog boxes.

(c) Create a Copy of the Metadata as XML at the User Machine

1. In the Catalog window, right-click the folder where you want to store the
metadata template.

2. Click New > XML Document. A new XML file with the default
name metadata.xml is created in the folder.

3. Type an appropriate name for the metadata template (metadata.xml).

4. Press ENTER.

5. The metadata.xml file does not yet contain any information to display in
the description tab.

6. Click the Edit button in the description tab.

7. Type in appropriate content for this metadata template.

8. Click the Save button in the description tab. The contents of the metadata
template will be displayed.
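
For illustration, such a metadata template could also be produced programmatically.
The minimal Python sketch below writes a metadata.xml file; the element names and text
are assumptions for the sketch, not the tool's actual schema.

import xml.etree.ElementTree as ET

# Build a minimal metadata template (element names are illustrative).
root = ET.Element("metadata")
ET.SubElement(root, "title").text = "Metadata template"
ET.SubElement(root, "description").text = "Copy of repository metadata kept on the user machine."

# Save it under the default name used in step 2 above.
ET.ElementTree(root).write("metadata.xml", encoding="utf-8", xml_declaration=True)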

Sample metadata in an XML file

Figure 36.1: Metadata from a batch job of the OS

Figure 36.2: Metadata file creation

Figure 36.3: Detailed description of metadata


[Figure content: Phase 1 (Cloud User) — CSUS tool (AES/RSA with Oracle Database 12c,
Oracle 10g, and SQL databases), metadata, DSP protocol, multilevel security, CSP
protocol, trust table; Phase 2 (CSP) — F5 iControl, F5 Application Delivery Network]

Figure 37: Architecture for the Proposed Contribution


6.5 Architecture for the Proposed Contribution
The architecture of the proposed contribution is shown in Figure 37. It focuses on
complete security in the cloud computing environment at the user, provider, and
interface levels.
It uses a three-phase structure, in which each phase performs its own duty to ensure
the security of data in the cloud.
1. The first phase is responsible for user authentication: it issues digital
certificates to the appropriate users, manages user permissions, and protects the
privacy of users. In this phase user data is encrypted, so that even if a key is
illegally obtained, a malicious user will still be unable to gain effective access to
the information; this privacy protection is very important for safeguarding business
users' trade secrets in a cloud computing environment. Security in this phase is
provided through the CSUS tool (Cloud Service User Security tool).
2. The second phase ensures that the user and the provider can access cloud-stored
data only if both have a trust value greater than a pre-set threshold (see the sketch
after this list). The CSP protocol is developed and used in this phase to provide
multilevel security on both the user and provider sides; a multilevel security
framework is also proposed for cross verification.
3. The third phase enables fast recovery of user data and remote verification of data
from the cloud server to the user machine; the data in this phase is protected using
the DSP protocol. Through the fast recovery algorithm, user data can be recovered to
the maximum extent possible even in case of damage.
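
The phase-2 access rule can be sketched as follows; the threshold value and the trust
scale are illustrative assumptions, not values specified by the scheme.

TRUST_THRESHOLD = 0.7  # illustrative pre-set threshold (assumption)

def access_allowed(user_trust, provider_trust, threshold=TRUST_THRESHOLD):
    # Cloud-stored data may be accessed only when both the user's and
    # the provider's trust values exceed the pre-set threshold.
    return user_trust > threshold and provider_trust > threshold

# Example: a trusted user paired with an insufficiently trusted provider is denied.
print(access_allowed(0.8, 0.6))  # False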

CONCLUSION
The DSP protocol allows users to obtain a probabilistic proof from the storage service
providers. Metadata verification is designed for this purpose: it restricts the third
party auditor to accessing only the metadata of the data being verified. The Data
Storage Protocol can be further enhanced to check security, the auditor's reliability,
and confidentiality in handling the data without bias. Furthermore, the data stored in
the cloud can be encrypted, and the codes generated for individual files can be sent
using secure transmission protocols.
