H.C. Hsiao, Y.H. Lin, et al. [1] presented a study of user-friendly hash schemes that identifies which schemes are the quickest and most accurate. In this study, Chinese, Korean and Japanese characters are compared with each other for use as hash values, and the strengths and weaknesses of each scheme are described. Wang Qian, Cong Wang, Kui Ren, Wenjing Lou and Jin Li [2] examined the storage service provider. A data storage system may attempt to hide errors such as data loss during relocation or power failure from its clients in order to maintain its own reputation. The service provider might also neglect to keep, or even deliberately delete, rarely accessed data files belonging to an ordinary client in order to save maintenance cost and storage space. For block tag authentication, the Merkle hash tree construction makes the system more complex and slows the process. The Merkle scheme also has a limited number of possible signatures, which raises the question of whether it can solve the problems generated by modern applications; there is thus doubt about its ability to handle the integrity of bulk data. G. Ateniese, R.C. Burns, R. Curtmola, et al. [3] proposed Provable Data Possession (PDP) to check servers, validate the integrity of data stored at untrusted servers, and detect illegal actions such as data modification and deletion. It did not, however, consider dynamic data auditing or the various causes of data loss.
Specialized techniques for data recovery, evidence authentication and the analysis of electronic data far exceed normal data collection and preservation.
The above stated problems mostly relate to InfoSec principles. InfoSec principles help protect information from unauthorized access, modification, disruption, destruction, etc. Confidentiality, integrity, availability and non-repudiation are some of the parameters that fall under InfoSec principles, and studying any security breach, such as malware attacks or the risk of hackers, in light of these parameters can help in solving these problems.
Objective 3 – A system that can ensure the safety of data against other possible circumstances that may result in data corruption or loss.
In this paper we propose a model for provable data possession (PDP) that allows a client that has stored data at an untrusted server to check that the server possesses the original data without retrieving it. The model produces probabilistic proofs of possession by sampling random sets of blocks from the server, which significantly decreases I/O costs. To verify a proof the client maintains a constant amount of metadata, and the challenge/response protocol transmits a small, constant amount of data, which reduces network communication. The PDP model for remote data checking thus supports large data sets in widely-distributed storage systems. We propose two provably-secure PDP schemes that are more efficient than previous solutions.
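To make the sampling idea concrete, the following is a minimal Python sketch assuming simple SHA-256 keyed tags per block; the real PDP schemes use homomorphic verifiable tags and return constant-size proofs rather than the raw blocks shown here, but the challenge/response flow is the same.

```python
import hashlib
import os
import random

BLOCK_SIZE = 4096  # hypothetical block size

def block_tag(index: int, block: bytes, key: bytes) -> bytes:
    """Keyed tag over (index, block); a stand-in for the scheme's homomorphic tags."""
    return hashlib.sha256(key + index.to_bytes(8, "big") + block).digest()

def preprocess(blocks, key):
    """Client side: tag every block once; afterwards only the key and tags are kept."""
    return [block_tag(i, b, key) for i, b in enumerate(blocks)]

def make_challenge(num_blocks: int, sample_size: int):
    """Verifier side: challenge a small random sample of block indices."""
    return random.sample(range(num_blocks), sample_size)

def respond(blocks, challenge):
    """Server side: return the challenged blocks (a real PDP response is a
    short aggregate, which is what keeps communication constant)."""
    return [blocks[i] for i in challenge]

def verify(challenge, response, tags, key) -> bool:
    """Client side: recompute tags for the sampled blocks and compare."""
    return all(block_tag(i, b, key) == tags[i]
               for i, b in zip(challenge, response))

if __name__ == "__main__":
    key = os.urandom(32)
    data = os.urandom(BLOCK_SIZE * 64)
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    tags = preprocess(blocks, key)
    chal = make_challenge(len(blocks), sample_size=8)
    assert verify(chal, respond(blocks, chal), tags, key)
```

Because the verifier samples blocks at random, a server that has discarded even a small fraction of the file is caught with high probability after a few challenges.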
The idea of data deduplication with differential privileges was proposed to ensure data security by including the data owner's privileges in the duplicate check. The data is stored securely on the cloud server in encrypted form, and the corresponding key is stored with each individual document on a key server. Several new deduplication constructions were presented that support authorized duplicate check in a hybrid cloud architecture, in which the duplicate-check tokens of files are generated by the private cloud server with its private keys. Security analysis shows that the schemes are secure against the insider and outsider attacks detailed in the proposed security model. As a proof of concept, a prototype of the proposed authorized duplicate-check scheme was implemented.
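As an illustration only, the Python sketch below shows how a privilege-dependent duplicate-check token might be derived; the key name, the HMAC-based token construction and the `upload` helper are hypothetical stand-ins for the scheme's private-cloud token generation.

```python
import hashlib
import hmac

# Held by the private cloud server; file owners never see it (placeholder value).
PRIVATE_KEY = b"private-cloud-server-key"

def duplicate_check_token(file_bytes: bytes, privilege: str) -> str:
    """Token derived from the file fingerprint and the owner's privilege,
    so only uploads with a matching privilege collide in the duplicate check."""
    fingerprint = hashlib.sha256(file_bytes).digest()
    return hmac.new(PRIVATE_KEY, fingerprint + privilege.encode(),
                    hashlib.sha256).hexdigest()

# The public cloud keeps an index of previously seen tokens.
seen_tokens = set()

def upload(file_bytes: bytes, privilege: str) -> bool:
    """Returns True if the file is stored, False if it is deduplicated."""
    token = duplicate_check_token(file_bytes, privilege)
    if token in seen_tokens:
        return False  # duplicate: store only a reference to the existing copy
    seen_tokens.add(token)
    return True
```

Keying the token with the private cloud's secret is what prevents an outside attacker from computing tokens offline and probing the store for the existence of particular files.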
It uses a Merkle tree-like structure to allow massive parallel computation of hashes for very long inputs. The design is motivated by claims from Intel describing a future of hardware processors with tens to thousands of cores instead of the conventional uni-core systems. With this in mind, Merkle tree hash structures exploit the full potential of such hardware while remaining suitable for current uni/dual-core architectures. In this tree-based structure, input blocks are hashed independently at the leaves and the results are combined pairwise up to the root.
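A minimal Python sketch of such a tree-based hash follows, assuming SHA-256 as the underlying primitive and a hypothetical 1 KB leaf size; since the leaves are independent, they can be hashed across all available cores.

```python
import hashlib
from concurrent.futures import ProcessPoolExecutor

CHUNK = 1024  # hypothetical leaf size

def leaf_hash(chunk: bytes) -> bytes:
    return hashlib.sha256(b"\x00" + chunk).digest()   # domain-separated leaf

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()  # internal node

def merkle_root(data: bytes) -> bytes:
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)] or [b""]
    # Leaves are independent, so hash them in parallel on every core.
    with ProcessPoolExecutor() as pool:
        level = list(pool.map(leaf_hash, chunks))
    # Combine pairwise up to the root; duplicate the last node on odd levels.
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

On platforms that spawn worker processes, `merkle_root` should be invoked from under an `if __name__ == "__main__":` guard; on a uni-core machine the same code simply degenerates to sequential leaf hashing.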
The purpose of anti-forensics is to intentionally make digital investigations and the examination of digital media more difficult through several means, including data forgery, data hiding and data deletion. The techniques differ in what they do, but their common purpose is to ensure that data is unrecoverable (Lucia, 2013).
From the viewpoint of data security, which has always been an important aspect of quality of service, cloud computing inevitably poses new challenging security threats, for a number of reasons. First, traditional cryptographic primitives for data security protection cannot be directly adopted, because users lose control of their data under cloud computing. Verification of correct data storage in the cloud must therefore be conducted without explicit knowledge of the whole data. Considering the many kinds of data each user stores in the cloud and the demand for long-term continuous assurance of data safety, the problem of verifying the correctness of data storage in the cloud becomes even more challenging [1].
Authentication is the principal method of protecting the information or data of an individual or organization from access by a second party. The level of authentication required depends upon the confidentiality of the particular data or information. Nowadays, the data and information we are talking about is being digitized all around the world, and for this digitized data or information to be secure, a proper authentication procedure must be in place. This gives rise to the need for an authentication secret belonging to the category "something we know". Such secrets authenticate each secret holder as the authorized, legitimate user of their particular account. Technology is
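As a concrete illustration of handling a "something we know" secret, the Python sketch below stores only a salted, slow hash of the secret and compares it in constant time; the iteration count and function names are illustrative assumptions, not a prescribed implementation.

```python
import hashlib
import hmac
import os

def register(secret: str) -> tuple[bytes, bytes]:
    """Store only a salted, slow hash of the secret, never the secret itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 200_000)
    return salt, digest

def authenticate(secret: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash for the presented secret and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = register("correct horse battery staple")
assert authenticate("correct horse battery staple", salt, stored)
assert not authenticate("wrong guess", salt, stored)
```

The salt prevents precomputed-table attacks on many accounts at once, while the deliberately slow key-derivation function raises the cost of guessing each individual secret.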
The linear combination of sampled blocks in the server's response is masked with randomness generated by the server. With random masking, the TPA no longer has all the information required to build up a correct system of linear equations, and therefore cannot derive the user's data content, no matter how many linear combinations over the same set of file blocks it collects. To support efficient handling of multiple auditing tasks, we further explore the technique of bilinear aggregate signatures to extend our main result into a multi-user setting, where the TPA can perform multiple auditing tasks simultaneously.
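The effect of random masking can be sketched in Python as follows; the pairing-based verification and the commitment to the blinding value are omitted, and the prime modulus is a placeholder, so this only illustrates why the TPA cannot accumulate a solvable system of linear equations.

```python
import secrets

P = 2**127 - 1  # placeholder prime modulus for the blinding arithmetic

def server_response(blocks, challenge):
    """challenge: list of (index, coefficient) pairs chosen by the TPA.
    The true linear combination is blinded by a fresh random r, so repeated
    challenges on the same indices never reveal the blocks themselves."""
    combo = sum(coeff * blocks[i] for i, coeff in challenge) % P
    r = secrets.randbelow(P)
    mu = (combo + r) % P   # masked linear combination sent to the TPA
    # In the full scheme r is never sent in the clear: only a commitment to
    # it is released, and mu is checked inside the bilinear pairing equation.
    return mu, r

blocks = [secrets.randbelow(P) for _ in range(16)]          # file blocks as field elements
challenge = [(i, secrets.randbelow(P)) for i in (1, 5, 9)]  # sampled indices + coefficients
mu, r = server_response(blocks, challenge)
```

Each response contributes one equation in the block values plus one fresh unknown r, so the TPA always has more unknowns than equations and the system never becomes solvable.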
In this chapter, we present the background needed to understand the work proposed in this thesis. As our research is mainly concerned with securing provenance, Section 2.1 covers provenance data, the standard model of its representation, and the applications of securing provenance. Section 2.2 introduces the graph databases that will later be used to store provenance in our prototype. Section 2.3 explores workflow systems and workflow provenance. Section 2.4 illustrates the main security principles tackled in the thesis and the inference problem, which threatens privacy. Finally, Section 2.5 summarizes the main content of this chapter.
The concept of "garbage in, garbage out" is relevant at this stage, because wrong input produces wrong output. They therefore needed to store genuine, verified data in a storage device that could serve as the initial input for the new system, adding new data when needed, protecting the data from any destruction, and finally arranging the data for easy and friendly access by its users.
With the Merkle hash tree, the root hash, together with the total size of the file and the piece size, is now the only information in the system that needs to come from a trustworthy source. A consumer that has only the root hash of a file can check any piece as follows: it first computes the hash of the piece it received, then combines it with the sibling hashes on the path to the root and compares the result against the trusted root hash. A Merkle hash tree over blocks can also support fine-grained update requests.
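The verification walk just described can be sketched as follows, assuming a plain SHA-256 tree (without the domain separation used in the earlier construction sketch) in which the piece's index determines whether the running hash is the left or right child at each level.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_piece(piece: bytes, index: int,
                 siblings: list[bytes], root: bytes) -> bool:
    """Recompute the path from the piece's leaf to the root using the sibling
    hashes supplied alongside the piece, then compare with the trusted root."""
    node = sha256(piece)                # step 1: hash the received piece
    for sibling in siblings:            # step 2: climb one level per sibling
        if index % 2 == 0:              # even index: node is a left child
            node = sha256(node + sibling)
        else:                           # odd index: node is a right child
            node = sha256(sibling + node)
        index //= 2
    return node == root                 # step 3: match the trusted root hash
```

Only log-many sibling hashes accompany each piece, so a consumer holding nothing but the root hash can verify pieces as they arrive and reject a corrupted piece immediately.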
Some of the most important procedures used in collecting information for use in a court of law include collecting live data from RAM images. Such live recovery can be performed with F-Response, which can collect data over a computer's network connection; information can be collected while the computer is logged on, connected to the network, or running (Carrier, 2006, p. 56). Another procedure used in the collection of information for forensic purposes concerns encrypted hard disks: encryption of the hard disk creates logical images that can be collected using F-Response (Eoghan & Gerasimos, 2008, p. 95). A further important procedure is making sure that all data storage devices are kept away from magnets and any other devices that might destroy the data stored on them. It is also important that the handling individuals obtain the information-collection manuals that help them collect information effectively (Eoghan & Gerasimos, 2008, p. 94).
Static hashing fixes the hash function and the number of buckets in advance, which leads to overflows or wasted space as the database grows or shrinks. To eliminate these problems, dynamic hashing structures have been proposed: they allow the hash function to be modified dynamically to accommodate the growth or shrinking of the database.
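One common dynamic scheme is extendible hashing, sketched below in Python: buckets split when full, and the directory doubles only when a splitting bucket's local depth equals the global depth. This is a simplified illustration under assumed parameters (fixed bucket capacity, no deletion), not a production structure.

```python
class Bucket:
    def __init__(self, depth: int, capacity: int = 4):
        self.depth = depth            # local depth: bits this bucket distinguishes
        self.capacity = capacity
        self.items = {}

class ExtendibleHash:
    """Directory-based dynamic hashing: the hash function's effective range
    (the number of index bits used) grows with the data instead of being
    fixed in advance."""

    def __init__(self):
        self.global_depth = 1
        self.dir = [Bucket(1), Bucket(1)]

    def _index(self, key) -> int:
        # Use the low global_depth bits of the hash as the directory index.
        return hash(key) & ((1 << self.global_depth) - 1)

    def put(self, key, value):
        b = self.dir[self._index(key)]
        if key in b.items or len(b.items) < b.capacity:
            b.items[key] = value
            return
        self._split(b)
        self.put(key, value)          # retry after the split

    def _split(self, b: Bucket):
        if b.depth == self.global_depth:   # directory must double first
            self.dir += self.dir
            self.global_depth += 1
        b.depth += 1
        new = Bucket(b.depth, b.capacity)
        # Directory entries whose new depth bit is 1 now point at the new bucket.
        for i, bucket in enumerate(self.dir):
            if bucket is b and (i >> (b.depth - 1)) & 1:
                self.dir[i] = new
        # Redistribute the overflowing bucket's items on the new bit.
        for key in list(b.items):
            if self.dir[self._index(key)] is new:
                new.items[key] = b.items.pop(key)

    def get(self, key):
        return self.dir[self._index(key)].items[key]

table = ExtendibleHash()
for k in range(100):
    table.put(k, str(k))
assert table.get(42) == "42"
```

Because only the overflowing bucket is split (and the directory merely doubles its pointer array), growth touches a small part of the structure at a time, which is exactly the property that static hashing lacks.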