Abstract: With the development of cloud computing, more and more users are adopting cloud storage services. However, several issues remain: 1) the cloud server may steal the shared data; 2) sharers may collude with the cloud server to steal the shared data; 3) the cloud server may tamper with the shared data; 4) sharers and the key generation center (KGC) may conspire to steal the shared data. In this paper, we combine the Advanced Encryption Standard (AES), hash algorithms, and accountable key-policy attribute-based encryption without key escrow (WOKE-AKP-ABE) to build a secure cloud storage scheme. The data are encrypted to protect privacy, and hash algorithms are used to prevent the cloud server from tampering with the data uploaded to the cloud. Analysis results show that the scheme can resist collusion attacks.
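The tamper-detection role of the hash algorithm in this scheme can be sketched as follows. This is a minimal illustration, not the paper's actual protocol: the function names are hypothetical, SHA-256 stands in for whichever hash the scheme uses, and the AES-encrypted payload is a placeholder.

```python
import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 digest of the (already encrypted) data as hex."""
    return hashlib.sha256(data).hexdigest()

def verify(downloaded: bytes, expected: str) -> bool:
    """Recompute the digest after download and compare it to the stored one."""
    return digest(downloaded) == expected

# Before upload: the owner records the digest of the ciphertext.
ciphertext = b"...AES-encrypted payload (placeholder)..."
stored_digest = digest(ciphertext)

# After download: any modification by the cloud server changes the digest.
assert verify(ciphertext, stored_digest)
assert not verify(ciphertext + b"x", stored_digest)
```

Because the digest is kept (or signed) outside the cloud server's control, a tampered ciphertext fails verification on download.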
Abstract: In recent years, several security architectures have been proposed for sensor networks [2][4]. One of these, TinySec by Chris Karlof, Naveen Sastry, and David Wagner, is a link-layer security architecture designed around the constraints of sensor networks (e.g., energy, bandwidth, and computational capability). TinySec employs CBC-mode encryption and CBC-MAC authentication based on the Skipjack block cipher, and is currently incorporated into TinyOS for sensor network security.
This paper introduces TinyHash, a module based on general hash algorithms that replaces the authentication and integrity parts of TinySec; that is, it applies hash algorithms within the TinySec architecture. For compatibility with TinySec, the components of TinyHash are structured similarly to those of TinySec. TinyHash implements an HMAC component for authentication and a Digest component for message integrity. Additionally, we define several interfaces for services associated with hash algorithms.
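The HMAC-based authentication that TinyHash substitutes for CBC-MAC can be sketched in a few lines. This is an illustrative sketch only, not the nesC/TinyOS component itself: the function names, the key, and the choice of SHA-1 as the underlying hash are assumptions for the example.

```python
import hashlib
import hmac

def make_mac(key: bytes, message: bytes) -> bytes:
    """Compute an HMAC tag over the packet payload (SHA-1 chosen for illustration)."""
    return hmac.new(key, message, hashlib.sha1).digest()

def check_mac(key: bytes, message: bytes, tag: bytes) -> bool:
    """Verify a received tag using a constant-time comparison."""
    return hmac.compare_digest(make_mac(key, message), tag)

# A sender and receiver sharing a link-layer key authenticate each packet.
key = b"shared-link-layer-key"
packet = b"sensor reading: 23.5C"
tag = make_mac(key, packet)

assert check_mac(key, packet, tag)                      # genuine packet accepted
assert not check_mac(key, b"sensor reading: 99.9C", tag)  # forged packet rejected
```

The receiver recomputes the tag from the shared key and payload, so a packet modified or forged in transit fails the check.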
Abstract: In this paper, a novel multi-join algorithm for joining multiple relations is introduced. The algorithm is based on a hash-based join of two relations that produces a double index. This is done by scanning the two relations once; but instead of moving the records into buckets, a double index is built, which eliminates the collisions that can arise from a full hash algorithm. The double index is divided into join buckets of similar categories from the two relations. The algorithm then joins buckets with matching keys to produce joined buckets, yielding a complete join index of the two relations without actually joining the relations themselves. The time complexity required to build the join index of two categories is O(m log m), where m is the size of each category, for a total of O(n log m) over all buckets. The join index is used to materialize the joined relation if required; otherwise, it is used along with the join indices of other relations to build a lattice for multi-join operations with minimal I/O requirements. The lattice of join indices can fit into main memory, reducing the time complexity of the multi-join algorithm.
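The core idea above — scanning each relation once, recording record positions rather than moving records, and pairing buckets with matching keys — can be sketched as follows. This is a simplified reading of the abstract, not the paper's algorithm: the function names and data layout are assumptions, and the categories here are simply the distinct join-key values.

```python
from collections import defaultdict

def build_double_index(R, S, key_r, key_s):
    """Scan each relation once, grouping record positions (not records)
    by join key: one list of positions from R, one from S, per key."""
    idx = defaultdict(lambda: ([], []))
    for i, rec in enumerate(R):
        idx[rec[key_r]][0].append(i)
    for j, rec in enumerate(S):
        idx[rec[key_s]][1].append(j)
    return idx

def join_index(double_idx):
    """Pair up record positions bucket by bucket, producing a join index
    of (R-position, S-position) pairs without touching the actual tuples."""
    return [(i, j)
            for r_ids, s_ids in double_idx.values()
            for i in r_ids for j in s_ids]

R = [{"id": 1, "k": "a"}, {"id": 2, "k": "b"}]
S = [{"sid": 10, "k": "b"}, {"sid": 11, "k": "a"}, {"sid": 12, "k": "b"}]
ji = join_index(build_double_index(R, S, "k", "k"))
# ji holds the record-position pairs that would join; the joined relation
# is materialized from it only if required.
```

Keeping only position pairs is what lets several such join indices be held in main memory together as the lattice used for the multi-join.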