Hotel Guest's Liability for Non-Payment of Hotel Services in Comparative Law

The subject of this paper is a comparative analysis of the hotel guest's contractual liability for breaching the obligation to pay for hotel services under the hotel-keeper's contract. The paper is methodologically organized into six chapters (1. introduction, 2. comparative law sources of the hotel-keeper's contract, 3. the guest's obligation to pay for hotel services, 4. the hotel guest's liability for non-payment, 5. the hotel-keeper's rights in case of non-payment, and 6. conclusion), which analyze the guest's liability for non-payment of hotel services in international law, European law, continental European national laws (France, Germany, Italy, Croatia), and Anglo-American national laws (UK, USA). The paper's results synthesize the answers to the stated hypotheses into a comparative review of the hotel guest's contractual liability for non-payment of hotel services. It concludes that an international convention on the hotel-keeper's contract should be adopted to unify the institute of the hotel guest's contractual liability for non-payment of hotel services at the international level.

Replicating Data Objects in Large-Scale Distributed Computing Systems Using Extended Vickrey Auction

This paper proposes a novel game-theoretical technique to address the problem of data object replication in large-scale distributed computing systems. The proposed technique draws inspiration from computational economic theory and employs the extended Vickrey auction. Specifically, players in a non-cooperative environment compete for scarce server-side memory space in which to replicate data objects, so as to minimize the total network object transfer cost while maintaining object concurrency. Optimizing this cost in turn leads to load balancing, fault tolerance, and reduced user access time. The method is experimentally evaluated against four well-known techniques from the literature: branch and bound, greedy, bin packing, and genetic algorithms. The experimental results reveal that the proposed approach outperforms all four techniques in both execution time and solution quality.
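
As background for readers unfamiliar with the mechanism, the following is a minimal sketch of the classic single-object Vickrey (second-price, sealed-bid) auction in Python; the paper's extended variant for replica placement is more elaborate, and the server names and bid values here are purely illustrative.

```python
# Minimal single-object Vickrey (second-price, sealed-bid) auction.
# Each server bids the cost saving it expects from hosting a replica;
# the highest bidder wins but pays only the second-highest bid, which
# makes truthful bidding a dominant strategy.

def vickrey_auction(bids):
    """bids: dict mapping bidder id -> bid value. Returns (winner, price)."""
    if len(bids) < 2:
        raise ValueError("a Vickrey auction needs at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]          # second-highest bid
    return winner, price

# Illustrative only: three servers bidding their transfer-cost savings.
bids = {"server_A": 120.0, "server_B": 95.0, "server_C": 140.0}
winner, price = vickrey_auction(bids)
print(f"{winner} wins the replica and pays {price}")  # server_C pays 120.0
```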

A Multiple-Objective Environmental Rationalization and Optimization for Material Substitution in the Production of Stone-Washed Jeans Garments

As the textile industry is the second largest industry in Egypt, and as small and medium-sized enterprises (SMEs) make up a great portion of it, it is essential to apply the concept of cleaner production in order to reduce pollution. To achieve this goal, a case study concerned with eco-friendly stone-washing of jeans garments was investigated. A raw-material substitution option was adopted, whereby the toxic potassium permanganate and sodium sulfide were replaced by the environmentally compatible hydrogen peroxide and glucose, respectively; the concentrations of both replacement chemicals, together with the operating time, were then optimized. In addition, a process-rationalization option involving four additional processes was investigated. Using criteria such as product quality, effluent analysis, mass and heat balance, and cost analysis, with the aid of a statistical model, the process optimization revealed that the optima were 50%, 0.15%, and 50 min for H2O2 concentration, glucose concentration, and time, respectively. With these values, the optimized process should reduce the annual cost by about EGP 10^5 relative to the currently used conventional method.

The Suitability of GPS Receivers Update Rates for Navigation Applications

Navigation is the process of monitoring and controlling the movement of an object from one place to another. Currently, the Global Positioning System (GPS) is the main system used worldwide for navigation applications. A GPS receiver needs signals from at least four satellites (three if altitude is already known) to compute and display its own position, and the displayed positioning information is refreshed continuously. The update rate is the number of position fixes the receiver computes and outputs per second, and it governs how fresh the displayed position is. A higher update rate decreases display lag and improves distance measurement and tracking, especially when moving on a curvy route. The majority of GPS receivers in use today update once per second (1 Hz). This period is reasonable for some applications but relatively long for high-speed ones. In this paper, the suitability and feasibility of GPS receivers with different update rates are evaluated for various applications according to the level of speed and the update rate each application requires.
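
To make the effect of update rate concrete, the distance travelled between consecutive fixes is simply speed divided by update rate. The sketch below, with illustrative speeds and rates, shows how a 1 Hz receiver leaves large positional gaps at highway and aircraft speeds.

```python
# Distance travelled between consecutive GPS fixes: d = v / f,
# where v is speed in m/s and f is the receiver update rate in Hz.

def gap_between_fixes(speed_kmh, update_rate_hz):
    speed_ms = speed_kmh / 3.6
    return speed_ms / update_rate_hz

for speed in (5, 100, 900):            # walking, highway, aircraft (km/h)
    for rate in (1, 5, 10):            # common receiver update rates (Hz)
        gap = gap_between_fixes(speed, rate)
        print(f"{speed:4d} km/h at {rate:2d} Hz -> {gap:6.1f} m between fixes")
```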

Voltage-Controllable Liquid Crystal Lens

This study investigates a voltage-controllable liquid crystal lens with a Fresnel zone electrode. When a proper voltage is applied to the liquid crystal cell, a Fresnel-zone-distributed electric field is induced that directs the liquid crystals to align in a concentric structure. Owing to this concentric alignment, a Fresnel lens is formed. We probe the Fresnel liquid crystal lens with a polarized incident beam at a wavelength of 632.8 nm and find that the diffraction efficiency depends on the applied voltage. A remarkable diffraction efficiency of ~39.5% is measured at a voltage of 0.9 V. Additionally, a dual-focus lens is fabricated by attaching a plano-convex lens to the Fresnel liquid crystal cell. The Fresnel LC lens and the dual-focus lens may be applied in DVD/CD pick-up heads, confocal microscopy systems, or electrically controlled optical systems.
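
For reference, the n-th zone boundary of a Fresnel zone plate with primary focal length f at wavelength λ lies at r_n = sqrt(n·λ·f). The sketch below computes the first few zone radii for the 632.8 nm probe beam; the focal length is an assumed illustrative value, not a figure from the paper.

```python
# Fresnel zone plate geometry: the n-th zone boundary radius is
#   r_n = sqrt(n * wavelength * f)
# for primary focal length f. The focal length below is an assumed
# illustrative value, not taken from the paper.
import math

wavelength = 632.8e-9   # He-Ne probe beam (m)
f = 0.5                 # assumed focal length (m), illustrative only

for n in range(1, 6):
    r_n = math.sqrt(n * wavelength * f)
    print(f"zone {n}: r = {r_n * 1e3:.3f} mm")
```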

A New Effective Local Search Heuristic for the Maximum Clique Problem

An edge-based local search algorithm, called ELS, is proposed for the maximum clique problem (MCP), a well-known combinatorial optimization problem. ELS is a two-phase local search method that effectively finds near-optimal solutions for the MCP. A vertex parameter called 'support', defined within ELS, greatly reduces the number of random selections among vertices, as well as the number of iterations and the running time. Computational results on the BHOSLIB and DIMACS benchmark graphs indicate that ELS achieves state-of-the-art performance on the maximum clique problem with reasonable average running times.
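
The paper's ELS heuristic is edge-based and uses the 'support' parameter; as a much simpler point of comparison for readers, here is a hedged sketch of a random-restart greedy clique search over an adjacency-set graph. It is only a baseline, not the authors' algorithm.

```python
# A simple random-restart greedy heuristic for the maximum clique problem.
# This is only a baseline sketch, not the ELS algorithm from the paper.
import random

def greedy_clique(adj, restarts=100, seed=0):
    """adj: dict vertex -> set of neighbours. Returns the best clique found."""
    rng = random.Random(seed)
    best = []
    for _ in range(restarts):
        clique = []
        candidates = set(adj)
        while candidates:
            v = rng.choice(sorted(candidates))   # random admissible vertex
            clique.append(v)
            candidates &= adj[v]                 # keep common neighbours only
        if len(clique) > len(best):
            best = clique
    return best

# Tiny illustrative graph: {1, 2, 3} is the maximum clique.
adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2}, 4: {2}}
print(greedy_clique(adj))   # e.g. [1, 2, 3]
```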

On Constructing Approximate Convex Hull

Algorithms for constructing convex hulls have been studied extensively in the literature, principally because of their wide range of applications in different areas. This article presents an efficient algorithm that constructs an approximate convex hull of a set of n points in the plane in O(n + k) time, where k is the approximation error control parameter. The proposed algorithm is suitable for applications that prefer to trade accuracy for reduced computation time, such as animation and interaction in computer graphics, where rapid, real-time rendering is indispensable.
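
One classic way to obtain a near-linear approximate hull is the strip-based Bentley-Faust-Preparata scheme: bucket the points into k vertical strips, keep only each strip's lowest and highest point (plus the extreme-x points), and run an exact hull on the O(k) survivors. The sketch below follows that scheme as an illustration; it is not necessarily the algorithm proposed in the article.

```python
# Strip-based approximate convex hull (Bentley-Faust-Preparata style).
# One O(n) pass buckets points into k vertical strips and keeps each
# strip's min-y and max-y point; an exact hull then runs on O(k) points.

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def exact_hull(pts):                    # Andrew's monotone chain
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def approx_hull(pts, k):
    xmin = min(p[0] for p in pts)
    xmax = max(p[0] for p in pts)
    width = (xmax - xmin) / k or 1.0
    lo, hi = {}, {}
    for p in pts:                       # O(n): one pass over the input
        s = min(int((p[0] - xmin) / width), k - 1)
        if s not in lo or p[1] < lo[s][1]:
            lo[s] = p
        if s not in hi or p[1] > hi[s][1]:
            hi[s] = p
    survivors = set(lo.values()) | set(hi.values())
    survivors.add(min(pts)); survivors.add(max(pts))
    return exact_hull(list(survivors))  # exact hull on the O(k) survivors

print(approx_hull([(0, 0), (4, 0), (2, 3), (1, 1), (3, 1), (2, -1)], k=4))
```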

Fuzzy Numbers and MCDM Methods for Portfolio Optimization

This paper demonstrates a new application of two multiple criteria decision making (MCDM) techniques, Simple Additive Weighting (SAW) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), to portfolio allocation. Rather than referring exclusively to mean and variance, as in the traditional mean-variance method, the criteria used here are the first four moments of the portfolio distribution. Each asset is evaluated based on its marginal impact on the portfolio's higher moments, which are characterized by trapezoidal fuzzy numbers. Centroid-based defuzzification is then applied to convert the fuzzy numbers into crisp numbers, on which SAW and TOPSIS can operate. Experimental results suggest that the two MCDM approaches are similarly efficient at selecting dominant assets for an optimal portfolio under higher moments. The proposed approaches allow investors to flexibly adjust their risk preferences regarding higher moments via different weighting schemes, accommodating various kinds of investors, from conservative to risky. Another significant advantage is that, compared with mean-variance analysis, the portfolio weights obtained by SAW and TOPSIS are consistently well diversified.
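
To illustrate the defuzzification-plus-scoring step, the sketch below computes the centroid of a trapezoidal fuzzy number (a, b, c, d) in closed form and then applies a SAW weighted sum. The asset ratings and criterion weights are invented for illustration, not taken from the paper's experiments.

```python
# Centroid defuzzification of a trapezoidal fuzzy number (a, b, c, d)
# followed by a Simple Additive Weighting (SAW) score.
# Asset values and criterion weights below are purely illustrative.

def centroid(a, b, c, d):
    """Centroid of a trapezoidal fuzzy number with support [a, d], core [b, c]."""
    if d + c == a + b:                      # degenerate (crisp) number
        return (a + d) / 2.0
    return ((d*d + c*c + c*d) - (a*a + b*b + a*b)) / (3.0 * (d + c - a - b))

def saw_score(fuzzy_criteria, weights):
    """fuzzy_criteria: list of (a, b, c, d) per criterion; weights sum to 1."""
    crisp = [centroid(*t) for t in fuzzy_criteria]
    return sum(w * v for w, v in zip(weights, crisp))

# Hypothetical asset rated on four moment-based criteria (already normalized).
asset = [(0.2, 0.4, 0.5, 0.7),   # mean impact
         (0.1, 0.3, 0.3, 0.5),   # variance impact (triangular: b == c)
         (0.4, 0.5, 0.6, 0.8),   # skewness impact
         (0.2, 0.2, 0.4, 0.6)]   # kurtosis impact
weights = [0.4, 0.3, 0.2, 0.1]
print(f"SAW score: {saw_score(asset, weights):.4f}")
```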

TOSOM: A Topic-Oriented Self-Organizing Map for Text Organization

The self-organizing map (SOM) is a well-known neural network model with a wide range of applications. The main characteristics of the SOM are two-fold: dimension reduction and topology preservation. Using a SOM, a high-dimensional data space is mapped to a low-dimensional space while the topological relations among the data are preserved. With such characteristics, the SOM is usually applied to data clustering and visualization tasks. However, the SOM has the main disadvantage that the number and structure of its neurons must be known prior to training, and these are difficult to determine. Several schemes have been proposed to tackle this deficiency, for example growing/expandable SOMs, hierarchical SOMs, and growing hierarchical SOMs. These schemes can dynamically expand the map, and even generate hierarchical maps, during training, and encouraging results have been reported. Essentially, they adapt the size and structure of the map according to the distribution of the training data; that is, they are data-driven, or data-oriented, SOM schemes. In this work, a topic-oriented SOM scheme suitable for document clustering and organization is developed. The proposed SOM automatically adapts both the number and the structure of the map according to identified topics. Unlike other data-oriented SOMs, our approach expands the map and generates the hierarchies according to both the topics and the characteristics of the neurons. Preliminary experiments give promising results and demonstrate the plausibility of the method.
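
For readers new to SOMs, the core of training is: find the best-matching unit (BMU) for an input, then pull the BMU and its map neighbours toward the input with a decaying neighbourhood kernel. Below is a minimal NumPy sketch of such a training loop on a fixed grid, i.e., the plain data-driven SOM that TOSOM generalizes; the grid size, schedules, and toy data are illustrative choices.

```python
# Minimal fixed-grid SOM training loop (the plain SOM that TOSOM extends).
import numpy as np

def train_som(data, rows=4, cols=4, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((rows * cols, dim))
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)               # decaying learning rate
        sigma = sigma0 * (1.0 - t / epochs) + 0.5   # shrinking neighbourhood
        for x in rng.permutation(data):
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
            h = np.exp(-dist2 / (2.0 * sigma ** 2))  # neighbourhood kernel
            weights += lr * h[:, None] * (x - weights)
    return weights.reshape(rows, cols, dim)

data = np.random.default_rng(1).random((100, 3))    # toy 3-D inputs
som = train_som(data)
print(som.shape)    # (4, 4, 3): a 4x4 map of 3-D prototype vectors
```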

PIIN Suppression Using Random Diagonal Code for Spectral Amplitude Coding Optical CDMA System

A new code for spectral-amplitude-coding optical code-division multiple-access systems, called the Random Diagonal (RD) code, is proposed. This code is constructed from a code segment and a data segment. One of its important properties is that the cross-correlation at the data segment is always zero, which means that phase-induced intensity noise (PIIN) is reduced. For the performance analysis, the effects of phase-induced intensity noise, shot noise, and thermal noise are considered simultaneously. Bit-error-rate (BER) performance is compared with the Hadamard and Modified Frequency Hopping (MFH) codes. It is shown that a system using the new code matrices not only suppresses PIIN but also supports a larger number of active users compared with the other codes. Simulation results show that, for point-to-point transmission with three encoded channels, the RD code has better BER performance than the other codes; it is also found that at 0 dBm the PIIN noise is 10^-10 and 10^-11 for RD and MFH, respectively.
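
The zero-cross-correlation property at the data segment can be checked directly: the in-phase cross-correlation of two codewords is the inner product of their chip patterns. The matrix below is a hypothetical illustration, not the RD construction itself; its data-segment columns simply give each user a disjoint chip so the pairwise inner products vanish.

```python
# Checking cross-correlation between codewords, segment by segment.
# The code matrix here is hypothetical (NOT the RD construction): it
# merely illustrates the zero-cross-correlation property at the data
# segment, where each user occupies a disjoint chip.
import numpy as np

#                 |- code segment -|- data segment -|
codes = np.array([[1, 0, 1, 1,       1, 0, 0],
                  [0, 1, 1, 0,       0, 1, 0],
                  [1, 1, 0, 1,       0, 0, 1]])
split = 4   # first 4 chips: code segment; remaining chips: data segment

for i in range(len(codes)):
    for j in range(i + 1, len(codes)):
        cc_code = int(codes[i, :split] @ codes[j, :split])
        cc_data = int(codes[i, split:] @ codes[j, split:])
        print(f"users {i},{j}: code-segment CC = {cc_code}, "
              f"data-segment CC = {cc_data}")   # data-segment CC is always 0
```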

Mining and Visual Management of XML-Based Image Collections

This article describes Uruk, the virtual museum of Iraq that we developed for visual exploration and retrieval of image collections. The system largely exploits the loosely structured hierarchy of XML documents, which provides a useful representation for storing semi-structured or unstructured data that does not fit easily into existing databases. The system offers users the ability to mine and manage XML-based image collections through a web-based graphical user interface (GUI). In a typical interactive session, the user browses a visual structural summary of the XML database in order to select interesting elements. Using this intermediate result, queries combining structure and textual references can be composed and submitted to the system. After query evaluation, the full set of answers is presented in a visual and structured way.
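
As an example of a query combining structure and textual references, the sketch below builds a tiny XML collection in memory and evaluates an XPath expression over it. The element names (artifact, period, description, image) are invented for illustration, and the lxml library is assumed for full XPath 1.0 support.

```python
# Structure + text query over an XML image collection using XPath.
# Element names are invented for illustration; lxml provides full
# XPath 1.0 support (the stdlib ElementTree subset lacks contains()).
from lxml import etree

collection = etree.fromstring(b"""
<museum>
  <artifact><period>Uruk</period>
    <description>clay tablet with cuneiform signs</description>
    <image src="tablet_001.jpg"/></artifact>
  <artifact><period>Akkadian</period>
    <description>bronze figurine</description>
    <image src="figurine_002.jpg"/></artifact>
</museum>""")

# Structural constraint (period element) combined with a textual one.
hits = collection.xpath(
    '//artifact[period="Uruk"][contains(description, "clay")]/image/@src')
print(hits)   # ['tablet_001.jpg']
```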

Characterizations of Star-Shaped, L-Convex, and Convex Polygons

A chord of a simple polygon P is a line segment [xy] that intersects the boundary of P only at its endpoints x and y. A chord of P is called an interior chord provided the interior of [xy] lies in the interior of P. P is weakly visible from [xy] if for every point v in P there exists a point w in [xy] such that [vw] lies in P. In this paper, star-shaped, L-convex, and convex polygons are characterized in terms of weak visibility properties from interior chords and star-shaped subsets of P. A new Krasnoselskii-type characterization of isothetic star-shaped polygons is also presented.

Haar Wavelet Method for Solving the FitzHugh-Nagumo Equation

In this paper, we develop an accurate and efficient Haar wavelet method for the well-known FitzHugh-Nagumo equation. The proposed scheme can be applied to a wide class of nonlinear reaction-diffusion equations. The power of this easily manageable method is confirmed: the use of Haar wavelets is found to be accurate, simple, fast, flexible, convenient, and computationally attractive, with small computation costs.
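
In Chen-Hsiao-style Haar methods, the unknown highest derivative is expanded over the first m Haar functions evaluated at the collocation points x_l = (l + 0.5)/m. The sketch below builds the m x m Haar matrix used by such schemes; it shows the standard family on [0, 1), not the paper's full FitzHugh-Nagumo solver, which additionally needs the operational matrices of integration.

```python
# Build the m x m Haar matrix H with H[i, l] = h_{i+1}(x_l) at the
# collocation points x_l = (l + 0.5) / m, for m a power of two.
import numpy as np

def haar_matrix(m):
    x = (np.arange(m) + 0.5) / m            # collocation points x_l
    H = np.zeros((m, m))
    H[0] = 1.0                              # scaling function h_1 = 1
    for i in range(1, m):
        j = int(np.log2(i))                 # dilation level
        k = i - 2 ** j                      # translation within the level
        lo, mid, hi = k / 2**j, (k + 0.5) / 2**j, (k + 1) / 2**j
        H[i, (lo <= x) & (x < mid)] = 1.0
        H[i, (mid <= x) & (x < hi)] = -1.0
    return H

print(haar_matrix(4))
# [[ 1.  1.  1.  1.]
#  [ 1.  1. -1. -1.]
#  [ 1. -1.  0.  0.]
#  [ 0.  0.  1. -1.]]
```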

Characteristics Analysis of Voltage Sag and Voltage Swell in Multi-Grounded Four-Wire Power Distribution Systems

In North America, most power distribution systems employ a four-wire multi-grounded neutral (MGN) design. Under unbalanced conditions, the inherent characteristics of such systems make the mechanism of voltage sag and voltage swell in MGN feeders difficult to understand. This paper explains these characteristics of multi-grounded three-phase four-wire distribution systems and introduces an equivalent model of a full-scale multi-grounded distribution system implemented in MATLAB under Windows, which serves as the simulation tool. The results are expected to help utility engineers understand the impact of the MGN design on distribution system operations.
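
As a reference point (the common IEEE 1159 magnitude bands, not the paper's feeder model), a sag is an rms drop to 0.1-0.9 pu and a swell a rise to 1.1-1.8 pu. A trivial per-phase classifier over such bands:

```python
# Classify per-phase rms voltage (in per-unit) as sag, swell, or normal,
# using the common IEEE 1159 magnitude bands. This is only a reference
# classifier, not the paper's MGN feeder model.

def classify(v_pu):
    if v_pu < 0.1:
        return "interruption"
    if v_pu <= 0.9:
        return "sag"
    if 1.1 <= v_pu <= 1.8:
        return "swell"
    return "normal"

# Illustrative unbalanced-fault snapshot: the faulted phase sags while a
# healthy phase swells, the typical MGN behaviour discussed in the paper.
for phase, v in {"A": 0.62, "B": 1.18, "C": 1.02}.items():
    print(f"phase {phase}: {v:.2f} pu -> {classify(v)}")
```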

The Spiral_OWL Model – Towards Spiral Knowledge Engineering

The spiral development model has been used successfully in many commercial systems and in a good number of defense systems. This is due to its cost-effective, incremental commitment of funds (via an analogy of the spiral model to stud poker) and to the fact that it can be used to develop hardware or to integrate software, hardware, and systems. To support adaptive, semantic collaboration between domain experts and knowledge engineers, a new knowledge engineering process, called Spiral_OWL, is proposed. This model is based on the idea of iterative refinement, annotation, and structuring of the knowledge base. The Spiral_OWL model is generated by combining the spiral model with knowledge engineering methodology. A central paradigm of the Spiral_OWL model is its concentration on risk-driven determination of the knowledge engineering process. The collaboration aspect comes into play during the knowledge acquisition and knowledge validation phases. The design rationale for the Spiral_OWL model is an easy-to-implement, well-organized, iterative development cycle that expands as a spiral.

A Serializability Condition for Multi-step Transactions Accessing Ordered Data

In mobile environments, unspecified numbers of transactions arrive in continuous streams. To prove the correctness of their concurrent execution, a method of modelling an infinite number of transactions is needed, whereas standard database techniques model fixed, finite schedules of transactions. Lately, techniques based on temporal logic have been proposed as suitable for modelling infinite schedules. The drawback of these techniques is that proving the basic serializability correctness condition is impractical, as encoding (the absence of) conflict cyclicity within large sets of transactions results in prohibitively large temporal logic formulae. In this paper, we show that, under certain common assumptions on the graph structure of the data items accessed by the transactions, conflict cyclicity need only be checked within all possible pairs of transactions. This yields formulae of considerably reduced size in any temporal-logic-based approach to proving serializability, and it scales to arbitrary numbers of transactions.
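
The pairwise check amounts to: for each pair (T1, T2), look for conflicting operations ordered both ways, i.e., a conflict edge T1 -> T2 and an edge T2 -> T1 in the conflict graph restricted to the pair. Below is a small sketch over finite schedules given as ordered (transaction, operation, item) triples, with an invented example schedule; the paper's contribution is the temporal-logic encoding of this idea for infinite streams.

```python
# Pairwise conflict-cycle check for a schedule given as an ordered list
# of (transaction, operation, item) triples. Two operations conflict if
# they touch the same item and at least one is a write; a pair (T1, T2)
# is non-serializable here if conflicts point in both directions.
from itertools import combinations

def pairwise_cycles(schedule):
    txns = {t for t, _, _ in schedule}
    bad_pairs = []
    for t1, t2 in combinations(sorted(txns), 2):
        edges = set()
        for i, (ta, opa, xa) in enumerate(schedule):
            for tb, opb, xb in schedule[i + 1:]:
                if {ta, tb} == {t1, t2} and xa == xb and "w" in (opa, opb):
                    edges.add((ta, tb))      # earlier op -> later op
        if (t1, t2) in edges and (t2, t1) in edges:
            bad_pairs.append((t1, t2))
    return bad_pairs

# Invented schedule: T1 and T2 conflict in both directions on x and y.
s = [("T1", "r", "x"), ("T2", "w", "x"), ("T2", "r", "y"), ("T1", "w", "y")]
print(pairwise_cycles(s))   # [('T1', 'T2')]
```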

Shape Restoration of the Left Ventricle

This paper describes an automatic algorithm for restoring the shape of three-dimensional (3D) left ventricle (LV) models created from magnetic resonance imaging (MRI) data, using a geometry-driven optimization approach. Our basic premise is to restore the LV shape such that the LV epicardial surface is smooth after the restoration. A geometrical measure known as the minimum principal curvature (κ2) is used to assess the smoothness of the LV and to construct the objective function of a two-step optimization process. The objective of the optimization is to achieve a smooth epicardial shape by iteratively translating the MRI slices in-plane. Quantitatively, this minimizes the sum of the magnitudes of κ2 over the regions where κ2 is negative. A limited-memory quasi-Newton algorithm, L-BFGS-B, is used to solve the optimization problem. We tested our algorithm on an in vitro theoretical LV model and on 10 in vivo patient-specific models containing significant motion artifacts. The results show that our method automatically restores the smoothness of LV models without altering the general shape of the model. The magnitudes of the in-plane translations are also consistent with existing registration techniques and experimental findings.
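
To indicate how such a slice-translation optimization can be set up, the sketch below minimizes a toy smoothness objective over per-slice in-plane offsets with SciPy's L-BFGS-B. The objective (squared second differences of the slice centres) is a simple stand-in for the paper's κ2-based measure, and the jittery data are synthetic.

```python
# Toy setup of the slice-translation optimization with L-BFGS-B.
# The objective here (squared second differences of per-slice centres)
# is a simple stand-in for the paper's curvature-based smoothness term.
import numpy as np
from scipy.optimize import minimize

n_slices = 10
rng = np.random.default_rng(0)
observed = np.cumsum(rng.normal(0, 1.0, (n_slices, 2)), axis=0)  # jittery centres

def roughness(flat_offsets):
    centres = observed + flat_offsets.reshape(n_slices, 2)
    second_diff = centres[2:] - 2 * centres[1:-1] + centres[:-2]
    return (second_diff ** 2).sum()

x0 = np.zeros(2 * n_slices)
bounds = [(-5.0, 5.0)] * x0.size          # bound the in-plane translations
res = minimize(roughness, x0, method="L-BFGS-B", bounds=bounds)
print(res.success, roughness(x0), "->", res.fun)
```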

Revisiting the Concept of Risk Analysis within the Context of Geospatial Database Design: A Collaborative Framework

The aim of this research is to design a collaborative framework that integrates risk analysis activities into the geospatial database design (GDD) process. Risk analysis is rarely undertaken iteratively as part of current GDD methods, in conformance with requirements engineering (RE) guidelines and risk standards. Consequently, when risk analysis is performed during GDD, some foreseeable risks may be overlooked and never reach the output specifications, especially when user intentions are not systematically collected. This may lead to ill-defined requirements and, ultimately, to higher risks of geospatial data misuse. The adopted approach consists of 1) reviewing the risk analysis process within the scope of RE and GDD, 2) analyzing the challenges of risk analysis within the context of GDD, and 3) presenting the components of a risk-based collaborative framework that improves the collection of the intended/forbidden usages of the data and helps geo-IT experts discover implicit requirements and risks.

Supercompression for Full-HD and 4K-3D (8K) Digital TV Systems

In this work, we develop the concept of supercompression, i.e., compression above the compression standard used; in this context, the two compression rates multiply. Supercompression is based on super-resolution: it is a data compression technique that superposes spatial image compression on top of bit-per-pixel compression to achieve very high compression ratios. When the compression ratio is very high, a convolutive mask inside the decoder restores the edges and eliminates the blur. Finally, both the encoder and the complete decoder are implemented on General-Purpose computation on Graphics Processing Units (GPGPU) cards; specifically, the mentioned mask is stored in the texture memory of a GPGPU.
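
The edge-restoring step can be pictured as a small convolutive sharpening mask applied to the decoded frame. The 3x3 kernel below is a generic Laplacian-based sharpener on a random stand-in frame, shown only to illustrate the operation; it is not the specific mask developed in the paper.

```python
# A generic 3x3 convolutive sharpening mask applied after decoding.
# This Laplacian-based kernel is illustrative only, not the specific
# edge-restoring mask developed in the paper (which runs on the GPGPU).
import numpy as np
from scipy.signal import convolve2d

sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)

frame = np.random.default_rng(0).random((8, 8))   # stand-in decoded frame
restored = convolve2d(frame, sharpen, mode="same", boundary="symm")
print(restored.shape)    # (8, 8): same-size frame with boosted edges
```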

Heat Exchanger Design

This paper is intended to assist anyone with some general technical experience but perhaps limited specific knowledge of heat transfer equipment. A characteristic of heat exchanger design is the iterative procedure of specifying a design, with its heat transfer area and pressure drops, and checking whether the assumed design satisfies all requirements. The purpose of this paper is to show how to design an oil cooler, in particular a shell-and-tube heat exchanger, which is the most common type of liquid-to-liquid heat exchanger. General design considerations and the design procedure are also illustrated, and a flow diagram is provided as an aid to the design procedure. MATLAB and AutoCAD are used in the design calculations. Fundamental heat transfer concepts and the complex relationships involved in such exchangers are also presented. The primary aim of the design is to obtain a high heat transfer rate without exceeding the allowable pressure drop. The resulting computer program is highly useful for designing shell-and-tube heat exchangers and for modifying existing designs.
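
The core sizing relation such a procedure iterates around is Q = U·A·F·LMTD, with LMTD = (ΔT1 - ΔT2)/ln(ΔT1/ΔT2) for counter-current end temperature differences ΔT1 and ΔT2. The sketch below performs one pass of that calculation; the stream temperatures, overall coefficient U, and correction factor F are illustrative assumptions, not values from the paper.

```python
# One pass of the LMTD sizing calculation for a shell-and-tube oil cooler:
#   Q = m_dot * cp * dT        (duty from the hot stream)
#   A = Q / (U * F * LMTD)     (required heat transfer area)
# The temperatures, U, and F below are illustrative assumptions.
import math

m_dot, cp = 2.0, 2100.0             # oil flow (kg/s) and heat capacity (J/kg.K)
t_hot_in, t_hot_out = 90.0, 60.0    # oil temperatures (C)
t_cold_in, t_cold_out = 25.0, 40.0  # cooling-water temperatures (C)
U = 350.0                           # assumed overall coefficient (W/m2.K)
F = 0.9                             # assumed LMTD correction factor

Q = m_dot * cp * (t_hot_in - t_hot_out)   # duty (W)
dT1 = t_hot_in - t_cold_out               # counter-current end differences
dT2 = t_hot_out - t_cold_in
lmtd = (dT1 - dT2) / math.log(dT1 / dT2)
A = Q / (U * F * lmtd)
print(f"duty = {Q/1000:.1f} kW, LMTD = {lmtd:.1f} K, area = {A:.1f} m^2")
```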