A Multi-Layer Consistency Protocol for Replica Management in Large-Scale Systems
Large-scale systems such as computational Grids are distributed
computing infrastructures that provide globally available network
resources. The evolution of information processing systems in Data
Grids is characterized by a strong decentralization of data across
several sites, with the objective of ensuring data availability and
reliability so as to provide fault tolerance and scalability; this is
only possible through replication techniques. Unfortunately, replication
comes at a high cost, because consistency must be maintained
between the distributed replicas. Nevertheless, accepting a certain
level of imperfection can improve system performance by increasing
concurrency. In this paper, we propose a multi-layer protocol
combining the pessimistic and optimistic approaches, designed for
consistency maintenance in large-scale systems. Our approach is
based on a hierarchical representation model with three layers, and
its objective is twofold: first, it reduces response times compared
to a purely pessimistic approach; second, it improves quality of
service compared to a purely optimistic approach.
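The abstract does not detail the protocol itself, but the combination it describes can be illustrated with a minimal sketch. The sketch below is purely hypothetical, not the authors' protocol: it assumes a three-layer tree (a `Grid` root, `Cluster` middle nodes, `Replica` leaves, all names invented here), applies writes pessimistically (synchronously) to every replica inside the originating cluster, and propagates them optimistically (lazily, reconciled by version number) to remote clusters.

```python
import itertools

class Replica:
    """Bottom layer: a leaf replica holding a versioned value."""
    def __init__(self, name):
        self.name = name
        self.value = None
        self.version = 0

class Cluster:
    """Middle layer: writes inside a cluster are pessimistic --
    applied synchronously to every local replica before returning."""
    def __init__(self, name, replicas):
        self.name = name
        self.replicas = replicas

    def write(self, value, version):
        # Synchronous (pessimistic) update of all local replicas.
        for r in self.replicas:
            r.value = value
            r.version = version

class Grid:
    """Top layer: propagation across clusters is optimistic --
    remote clusters are updated lazily, then reconciled by version."""
    def __init__(self, clusters):
        self.clusters = clusters
        self.clock = itertools.count(1)   # global version counter
        self.pending = []                 # lazily propagated updates

    def write(self, origin, value):
        version = next(self.clock)
        origin.write(value, version)      # pessimistic, local cluster
        for c in self.clusters:
            if c is not origin:
                # Optimistic: defer remote propagation.
                self.pending.append((c, value, version))
        return version

    def reconcile(self):
        # Apply deferred updates; newest version wins.
        for cluster, value, version in self.pending:
            if all(version > r.version for r in cluster.replicas):
                cluster.write(value, version)
        self.pending.clear()
```

The trade-off the abstract claims shows up directly in this structure: a client sees its write acknowledged after only the local (pessimistic) phase, so response time stays low, while the deferred `reconcile` step bounds the divergence that a purely optimistic scheme would allow.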