Optimization of the Characteristic Straight Line Method by a “Best Estimate” of Observed, Normal Orthometric Elevation Differences

In this paper, to optimize the “Characteristic Straight Line Method”, which is used in soil displacement analysis, a “best estimate” of the geodetic leveling observations has been achieved by taking into account the concept of height systems. This concept has been discussed in detail, and consequently the concept of “height” as well. In landslide dynamic analysis, the soil is considered as a mosaic of rigid blocks. The soil displacement has been monitored and analyzed by using the “Characteristic Straight Line Method”. Its characteristic components have been defined and constructed from a “best estimate” of the topometric observations. In the measurement of elevation differences, we have used the most modern leveling equipment available. Observational procedures have also been designed to provide the most effective method of acquiring data. In addition, systematic errors which cannot be sufficiently controlled by instrumentation or observational techniques are minimized by applying appropriate corrections to the observed data: the level collimation correction minimizes the error caused by non-horizontality of the leveling instrument's line of sight for unequal sight lengths; the refraction correction is modeled to minimize the refraction error caused by temperature (density) variation of air strata; the rod temperature correction accounts for variation in the length of the leveling rod's Invar/LO-VAR® strip resulting from temperature changes; the rod scale correction ensures a uniform scale which conforms to the international length standard; and the concept of height systems is introduced, where all types of height (orthometric, dynamic, normal, the gravity correction, and equipotential surfaces) have been investigated. The “Characteristic Straight Line Method” is slightly more convenient than the “Characteristic Circle Method”. It permits the evaluation of displacements of very small magnitude, even when the displacement is an infinitesimal quantity. The inclination of the landslide is given by the inverse of the distance from reference point O to the “Characteristic Straight Line”. Its direction is given by the bearing of the normal directed from point O to the Characteristic Straight Line (Fig. 6). A “best estimate” of the topometric observations was used to measure the elevation of carefully selected points, before and after the deformation. Gross errors have been eliminated by statistical analyses and by comparing the heights within local neighborhoods. The results of a test over an area where very interesting land surface deformation occurs are reported. Monitoring with different options and a qualitative comparison of results based on a sufficient number of check points are presented.
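
To make the geometric statement concrete, the following minimal sketch (hypothetical function and variable names, not the paper's implementation) computes the inclination and bearing described above from a fitted characteristic straight line a·x + b·y + c = 0 and a reference point O:

```python
import math

def characteristic_line_tilt(a, b, c, ox, oy):
    """Illustrative geometry only (names and signature are hypothetical).

    Given a fitted characteristic straight line a*x + b*y + c = 0 and a
    reference point O = (ox, oy), the inclination is taken as the inverse
    of the distance from O to the line, and the direction as the bearing
    of the normal from O to the line, as stated in the abstract.
    """
    norm = math.hypot(a, b)
    distance = abs(a * ox + b * oy + c) / norm            # point-to-line distance
    inclination = 1.0 / distance                          # inverse distance
    # Foot of the normal from O onto the line gives the direction of tilt.
    t = -(a * ox + b * oy + c) / (norm ** 2)
    fx, fy = ox + t * a, oy + t * b
    bearing = math.degrees(math.atan2(fx - ox, fy - oy)) % 360.0  # from north, clockwise
    return inclination, bearing

print(characteristic_line_tilt(1.0, -1.0, 5.0, 0.0, 0.0))
```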

In Search of Robustness and Efficiency via l1- and l2-Regularized Optimization for Physiological Motion Compensation

Compensating physiological motion in the context of minimally invasive cardiac surgery has become an attractive issue, since such surgery outperforms traditional cardiac procedures and offers remarkable benefits. Owing to space restrictions, computer vision techniques have proven to be the most practical and suitable solution. However, the lack of robustness and efficiency of existing methods makes physiological motion compensation an open and challenging problem. This work focuses on increasing robustness and efficiency via exploration of the classes of l1- and l2-regularized optimization, emphasizing the use of explicit regularization. Both approaches are based on natural features of the heart and use intensity information. Results pointed to the l1-regularized optimization class as the best, since it offered the lowest computational cost and the smallest average error, and it proved to work even under complex deformations.
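
As a reminder of what the two classes compare, the sketch below (synthetic data and a placeholder lambda, not the paper's tracking model) writes out the l2- and l1-regularized least-squares objectives and the closed-form l2 minimizer:

```python
import numpy as np

# Illustrative sketch of the two regularized objectives the abstract contrasts
# (the data, model, and lambda value here are placeholders, not the paper's).
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 10))            # design matrix (e.g., intensity features)
x_true = np.zeros(10); x_true[:3] = [1.5, -2.0, 0.7]
b = A @ x_true + 0.05 * rng.normal(size=40)
lam = 0.1

def l2_objective(x):                     # ridge-type: smooth, shrinks all coefficients
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(x ** 2)

def l1_objective(x):                     # lasso-type: promotes sparse solutions
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

# The l2 problem has a closed-form minimizer; the l1 problem is typically
# solved iteratively (e.g., proximal gradient methods).
x_l2 = np.linalg.solve(A.T @ A + 2 * lam * np.eye(10), A.T @ b)
print(l2_objective(x_l2), l1_objective(x_l2))
```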

Photomechanical Analysis of Wooden Testing Bodies under Flexural Loadings

The use of wood in rural construction has been widespread around the world since ancient times. However, its inclusion in structural design requires strong support from a broad knowledge of material properties. The pertinent literature reveals the application of optical methods to determining the complete displacement field on bodies exhibiting regular as well as irregular surfaces. The use of moiré techniques in experimental mechanics consists of analyzing the patterns generated on the body surface before and after deformation. The objective of this research work is to study the qualitative deformation behavior of wooden testing specimens under specific loading situations. The experimental setup follows the literature description of shadow moiré methods. Results indicate a strong anisotropy influence on the generated displacement field. Important qualitative as well as quantitative stress and strain distributions were obtained for wooden members applicable to rural constructions.

Finding Equilibrium in Transport Networks by Simulation and Investigation of Behaviors

The goal of this paper is to find the Wardrop equilibrium in transport networks in the case of uncertainty, where the uncertainty comes from a lack of information. We use a simulation tool to find the equilibrium, which gives only an approximate solution, but this is sufficient for large networks as well. In order to take the uncertainty into account, we have developed an interval-based procedure for finding the paths with minimal cost using the Dempster-Shafer theory. Furthermore, we have investigated the users' behavior using a game-theoretic approach, because their path choices influence the costs of the other users' paths.
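
For intuition, a minimal simulation-style iteration (the method of successive averages on a toy two-route network with placeholder cost functions, not the paper's interval-based procedure) converges to a Wardrop equilibrium in which the used routes have equal cost:

```python
# Minimal sketch of finding a Wardrop equilibrium by simulation-style iteration.
# The demand and link cost functions are placeholders for illustration only.
demand = 10.0
def cost_a(f): return 10 + 2.0 * f      # travel cost on route A with flow f
def cost_b(f): return 15 + 1.0 * f      # travel cost on route B with flow f

fa = demand  # start with all demand on route A
for k in range(1, 1000):
    # all-or-nothing assignment to the currently cheaper route
    target = demand if cost_a(fa) < cost_b(demand - fa) else 0.0
    fa += (target - fa) / (k + 1)        # successive-averages step
print(fa, demand - fa, cost_a(fa), cost_b(demand - fa))  # costs equalize near 20
```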

Developing Vision-Based Digital Public Display as an Interactive Media

Interactive public displays serve as an innovative medium to promote enhanced communication between people and information. However, digital public displays are subject to a few constraints, such as content presentation. Content presentation needs to be made more interesting in order to attract people's attention and motivate people to interact with the display. In this paper, we propose an idea for implementing content with interaction elements for a vision-based digital public display. Vision-based techniques are applied as a sensor to detect passers-by, and themed content is suggested to attract their attention and encourage them to interact with the announcement content. Virtual objects, gesture detection, and projection installation are applied to attract the attention of passers-by. A preliminary study showed positive feedback on the interactive content designed for the public display. This new trend would be a valuable innovation, as the delivery of announcement content and information communication through this medium proves to be more engaging.

M2LGP: Mining Multiple Level Gradual Patterns

Gradual patterns have been studied for many years as they contain precious information. They have been integrated in many expert systems and rule-based systems, for instance to reason on knowledge such as “the greater the number of turns, the greater the number of car crashes”. In many cases, this knowledge has been considered as a rule “the greater the number of turns → the greater the number of car crashes”. Historically, works have thus focused on the representation of such rules, studying how the implication could be defined, especially fuzzy implication. These rules were defined by experts who were in charge of describing the systems they were working on so that they could operate automatically. More recently, approaches have been proposed to mine databases for automatically discovering such knowledge. Several approaches have been studied, the main scientific topics being: how to determine what a relevant gradual pattern is, and how to discover such patterns as efficiently as possible (in terms of both memory and CPU usage). However, in some cases, end-users are not interested in knowledge at the raw level and are rather interested in trends. Moreover, it may be the case that no relevant pattern can be discovered at a low level of granularity (e.g. city), whereas some can be discovered at a higher level (e.g. county). In this paper, we thus extend gradual pattern approaches in order to consider multiple level gradual patterns. For this purpose, we consider two aggregation policies, namely horizontal and vertical.
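
As background for what “relevant” means here, one common pair-based way of scoring a gradual pattern such as “the greater A, the greater B” is sketched below (toy data and a deliberately simple support definition, not the paper's mining algorithm):

```python
# Minimal sketch: score the gradual pattern "the greater the number of turns,
# the greater the number of car crashes" by counting object pairs that respect
# both variations. The data rows are placeholders for illustration only.
rows = [  # (turns, crashes) per road section
    (2, 1), (5, 3), (7, 4), (3, 2), (9, 8),
]

def gradual_support(data, i, j):
    concordant, total = 0, 0
    for a in range(len(data)):
        for b in range(len(data)):
            if a == b:
                continue
            total += 1
            if data[a][i] < data[b][i] and data[a][j] < data[b][j]:
                concordant += 1
    return concordant / total

print(gradual_support(rows, 0, 1))   # 0.5 for this perfectly concordant toy set
```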

A Methodology to Analyze Technology Convergence: Patent-Citation Based Technology Input-Output Analysis

This research proposes a methodology for patent-citation-based technology input-output analysis by applying patent information to input-output analysis, which was originally developed to capture the dependencies among different industries. For this analysis, a technology relationship matrix and its components, as well as input and technology inducement coefficients, are constructed using patent information. Then, a technology inducement coefficient is calculated by normalizing the degree of citation from certain IPCs (International Patent Classification codes) to different IPCs or to the same IPCs. Finally, we construct a Dependency Structure Matrix (DSM) based on the technology inducement coefficient to suggest a useful application of this methodology.
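
The sketch below illustrates the normalization idea on a tiny, hypothetical citation-count matrix between three IPC classes; the numbers, the column-wise normalization, and the threshold for the DSM are assumptions for illustration, not the paper's exact construction:

```python
import numpy as np

# Illustrative sketch only: citations[i, j] counts patent citations from IPC
# class j to IPC class i. Column-normalizing gives inducement-style
# coefficients; thresholding gives a simple dependency structure matrix.
citations = np.array([
    [12,  3,  0],
    [ 4,  8,  2],
    [ 1,  5,  9],
], dtype=float)

coefficients = citations / citations.sum(axis=0, keepdims=True)
print(coefficients)            # each column sums to 1
dsm = coefficients > 0.2       # assumed threshold for a strong dependency
print(dsm.astype(int))
```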

Database Development and Discrimination Algorithms for Membrane Protein Functions

We have developed a database of membrane protein functions, which contains more than 3000 experimental data entries on functionally important amino acid residues in membrane proteins, along with sequence, structure and literature information. Further, we have proposed different methods for identifying membrane proteins based on their functions: (i) discrimination of membrane transport proteins from other globular and membrane proteins and classification of them into channels/pores, electrochemical transporters, and active transporters, and (ii) identification of the β-signal for the insertion of mitochondrial β-barrel outer membrane proteins and of potential targets. Our method showed an accuracy of 82% in discriminating transport proteins and 68% in classifying them into the three different transporter classes. In addition, we have identified a motif for the targeting β-signal and potential candidates for mitochondrial β-barrel membrane proteins. Our methods can be used as effective tools for genome-wide annotation.

Discovering Complex Regularities by Adaptive Self-Organizing Classification

Data mining uses a variety of techniques, each of which is useful for some particular task. It is important to have a deep understanding of each technique and to be able to perform sophisticated analysis. In this article we describe a tool built to simulate a variation of the Kohonen network to perform unsupervised clustering and support the entire data mining process up to results visualization. A graphical representation helps the user to find a strategy to optimize classification by adding, moving or deleting a neuron in order to change the number of classes. The tool is also able to automatically suggest a strategy for optimizing the number of classes. The tool is used to classify macroeconomic data reporting the most developed countries' imports and exports. It is possible to classify the countries based on their economic behaviour and to use an ad hoc tool to characterize the commercial behaviour of a country in a selected class from the analysis of the positive and negative features that contribute to class formation.
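
For orientation, the following is a deliberately simplified, winner-take-all sketch of a Kohonen-style update (no neighbourhood function, synthetic data, placeholder network size and learning rates); it only illustrates the kind of unsupervised clustering the tool simulates, not the tool itself:

```python
import numpy as np

# Simplified Kohonen-style clustering: each sample pulls its best-matching
# neuron towards it; samples are then labelled by their nearest neuron.
rng = np.random.default_rng(1)
data = rng.normal(size=(200, 4))          # e.g., import/export indicators
neurons = rng.normal(size=(6, 4))         # 6 classes, one weight vector each

def train(samples, weights, epochs=20, lr=0.3):
    for _ in range(epochs):
        for x in samples:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            weights[bmu] += lr * (x - weights[bmu])               # move the winner
        lr *= 0.9                                                  # decay the learning rate
    return weights

train(data, neurons)
labels = [int(np.argmin(np.linalg.norm(neurons - x, axis=1))) for x in data]
print(labels[:10])
```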

Explorations in the Role of Emotion in Moral Judgment

Recent theorizing on the cognitive process of moral judgment has focused on the role of intuitions and emotions, marking a departure from the previous emphasis on conscious, step-by-step reasoning. My study investigated how being in a disgusted mood state affects moral judgment. Participants were induced to enter a disgusted mood state by listening to disgusting sounds and reading disgusting descriptions. Results show that, compared to controls who had not been induced to feel disgust, they were more likely to endorse actions that are emotionally aversive but maximize utilitarian returns. The result is analyzed using the 'emotion-as-information' approach to decision making and is consistent with the view that emotions play an important role in determining moral judgment.

A Delay-Tolerant Distributed Query Processing Architecture for Mobile Environment

Intermittent connectivity breaks the "always on" network assumption made by all distributed query processing systems. In modern-day systems, the absence of network connectivity is considered a fault. It might not be feasible to transmit, right away over the available connection, all the data accumulated since the last upload. Vital information may therefore be delayed excessively when less important information takes its place. Owing to the restricted and uneven bandwidth, it is vital that the mobile nodes make the most advantageous use of the connectivity when it arrives. Hence, in order to select the data that needs to be transmitted first, some sort of data prioritization is essential. A continuous query processing system for intermittently connected mobile networks, comprising a delay-tolerant continuous query processor distributed across the mobile hosts, is proposed in this paper. In addition, a mechanism for prioritizing query results has been designed that guarantees enhanced accuracy and reduced delay. Extensive simulation results illustrate that our architecture reduces client power consumption and increases query efficiency.
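
A hedged sketch of the prioritization idea follows: when a connection appears, higher-priority query results are drained first within the available bandwidth budget. The priority scheme, record format, and budget are illustrative assumptions, not the paper's mechanism:

```python
import heapq

outbox = []  # entries are (negated priority, sequence number, payload bytes)

def enqueue(priority, seq, payload):
    heapq.heappush(outbox, (-priority, seq, payload))

def drain(bandwidth_budget):
    """Send the most important results that fit in the current connection window."""
    sent = []
    while outbox and bandwidth_budget > 0:
        _, _, payload = heapq.heappop(outbox)
        sent.append(payload)
        bandwidth_budget -= len(payload)
    return sent

enqueue(5, 1, b"vital: patient alert")
enqueue(1, 2, b"routine: hourly sensor log")
enqueue(3, 3, b"update: location fix")
print(drain(bandwidth_budget=40))   # vital data goes out before routine data
```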

Oscillation Theorems for Second-order Nonlinear Neutral Dynamic Equations with Variable Delays and Damping

In this paper, we study the oscillation of a class of second-order nonlinear neutral damped variable delay dynamic equations on time scales. By using a generalized Riccati transformation technique, we obtain some sufficient conditions for the oscillation of the equations. The results of this paper improve and extend some known results. We also illustrate our main results with some examples.
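
For concreteness, one representative member of the class described (an assumed general form given here for illustration, since the abstract does not state the equation explicitly) is a second-order neutral damped dynamic equation on a time scale $\mathbb{T}$:

\[
\Bigl( r(t)\,\bigl[ z^{\Delta}(t) \bigr]^{\gamma} \Bigr)^{\Delta}
+ b(t)\,\bigl[ z^{\Delta}(t) \bigr]^{\gamma}
+ q(t)\, f\!\bigl( x(\delta(t)) \bigr) = 0,
\qquad z(t) = x(t) + p(t)\, x(\tau(t)), \quad t \in \mathbb{T},
\]

where $\tau(t)$ and $\delta(t)$ are the variable delays, $b(t)$ is the damping coefficient, and a generalized Riccati substitution such as $w(t) = r(t)\,[z^{\Delta}(t)]^{\gamma} / z^{\gamma}(t)$ is the kind of transformation from which sufficient oscillation conditions are typically derived.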

Online Signature Verification Using Angular Transformation for e-Commerce Services

The rapid growth of e-Commerce services has been clearly observed over the past decade. However, the methods used to verify authenticated users still depend widely on numeric approaches. The search for other verification methods suitable for online e-Commerce is therefore an interesting issue. In this paper, a new online signature-verification method using angular transformation is presented. Delay shifts existing in online signatures are estimated by a method relying on the angle representation. In the proposed signature-verification algorithm, all components of the input signature are extracted by considering the discontinuous break points in the stream of angular values. The estimated delay shift is then captured by comparison with the selected reference signature, and the matching error can be computed as the main feature used in the verification process. The threshold offsets are calculated from the two error characteristics of the signature-verification problem, the False Rejection Rate (FRR) and the False Acceptance Rate (FAR). The level of these two error rates depends on the chosen decision threshold, whose value is set so as to realize the Equal Error Rate (EER; FAR = FRR). The experimental results show that, through a simple program deployed on the Internet to demonstrate e-Commerce services, the proposed method provides 95.39% correct verification, 7% better than a DP-matching-based signature-verification method. In addition, signature verification with component extraction provides more reliable results than decision making on the whole signature.
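
The sketch below illustrates the angular representation idea only: pen samples (x, y) are converted into a stream of direction angles, and large jumps in angle mark candidate break points for segmenting components. The jump threshold and the sample stroke are assumptions, not the paper's parameters:

```python
import math

def to_angles(points):
    """Direction angle between each pair of consecutive pen samples."""
    return [math.atan2(y1 - y0, x1 - x0)
            for (x0, y0), (x1, y1) in zip(points, points[1:])]

def break_points(angles, jump=math.pi / 2):
    """Indices where the angular stream changes abruptly (candidate breaks)."""
    return [i for i in range(1, len(angles))
            if abs(angles[i] - angles[i - 1]) > jump]

stroke = [(0, 0), (1, 0), (2, 1), (3, 2), (3, 0), (3, -2)]   # placeholder samples
angles = to_angles(stroke)
print([round(a, 2) for a in angles], break_points(angles))
```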

Performance Evaluation of Wavelet Based Coders on Brain MRI Volumetric Medical Datasets for Storage and Wireless Transmission

In this paper, we evaluate the performance of some wavelet-based coding algorithms, namely 3D QT-L, 3D SPIHT and JPEG2K. In the first step we carry out an objective comparison of the three coders, 3D SPIHT, 3D QT-L and JPEG2K. For this purpose, eight MRI head scan test sets of 256 x 256 x 124 voxels have been used. Results show superior performance of the 3D SPIHT algorithm, whereas 3D QT-L outperforms JPEG2K. The second step consists of evaluating the robustness of the 3D SPIHT and JPEG2K coding algorithms over wireless transmission. Compressed dataset images are then transmitted over an AWGN or a Rayleigh wireless channel. Results show the superiority of JPEG2K over these two channel models. In fact, it has been deduced that JPEG2K is more robust to coding errors. Thus we may conclude that error-correcting codes are necessary in order to protect the transmitted medical information.
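
As a reference point for how such objective comparisons are usually scored, the sketch below computes PSNR between an original and a reconstructed slice; the synthetic arrays and the 8-bit peak value are placeholders, not the MRI test sets used in the paper:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images or volumes."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
slice_in = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)   # placeholder slice
noisy = np.clip(slice_in + rng.normal(0, 2, slice_in.shape), 0, 255).astype(np.uint8)
print(round(psnr(slice_in, noisy), 2))
```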

A New Method of Concealed Data Aggregation in Wireless Sensor Networks: A Case Study

Wireless sensor networks (WSNs) consist of many sensor nodes that are placed in unattended environments, such as military sites, in order to collect important information. Implementing a secure protocol that can prevent the forwarding of forged data and the modification of aggregated data, while keeping the delay and the communication, computation and storage overhead low, is very important. This paper presents a new protocol for concealed data aggregation (CDA). In this protocol, the network is divided into virtual cells, and the nodes within each cell produce a shared key to send and receive concealed data with each other. Because data aggregation in each cell is performed locally and a secure authentication mechanism is implemented, the data aggregation delay is very low, and malicious nodes cannot produce false data in the network. To evaluate the performance of our proposed protocol, we have presented computational models that show the performance and low overhead of our protocol.

Hubs as Catalysts for Geospatial Communication in Kinship Networks

Earlier studies in kinship networks have primarily focused on observing the social relationships existing between family relatives. In this study, we pre-identified hubs in the network to investigate if they could play a catalyst role in the transfer of physical information. We conducted a case study of a ceremony performed in one of the families of a small Hindu community – the Uttar Rarhi Kayasthas. Individuals (n = 168) who resided in 11 geographically dispersed regions were contacted through our hub-based representation. We found that using this representation, over 98% of the individuals were successfully contacted within the stipulated period. The network also demonstrated a small-world property, with an average geodesic distance of 3.56.

Identifying Blind Spots in a Stereo View for Early Decisions in SI for Fusion based DMVC

In DMVC, more than one source is available for the construction of side information. Newer techniques make use of both sources simultaneously by constructing a bitmask that determines the source of every block or pixel of the side information. A lot of computation is done to determine each bit in the bitmask. In this paper, we have tried to define areas that can only be well predicted by temporal interpolation and not by multiview interpolation or synthesis. We predict that any area not covered by two cameras cannot be appropriately predicted by multiview synthesis, and if we can identify such areas in the first place, we do not need to go through the full chain of computations for the pixels that lie in those areas. Moreover, this paper also defines a technique based on KLT to mark the above-mentioned areas before any other processing is done on the side view.
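
The sketch below is a hedged illustration of the KLT idea only: features detected in one view that fail to track into the second view flag candidate "blind" blocks that would be left to temporal prediction. The synthetic frames, block size, and tracker settings are placeholders, not the paper's pipeline:

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
left = (rng.random((240, 320)) * 255).astype(np.uint8)   # placeholder side view
right = np.roll(left, 8, axis=1)                          # crude stand-in for the second view
right[:, :40] = 0                                         # region not seen by the second camera

pts = cv2.goodFeaturesToTrack(left, maxCorners=300, qualityLevel=0.01, minDistance=7)
tracked, status, err = cv2.calcOpticalFlowPyrLK(left, right, pts, None)

# Flag 16x16 blocks whose features failed to track as blind-spot candidates.
blocks_y, blocks_x = left.shape[0] // 16, left.shape[1] // 16
blind = np.zeros((blocks_y, blocks_x), dtype=bool)
for p, ok in zip(pts.reshape(-1, 2), status.reshape(-1)):
    if not ok:
        bx = min(int(p[0]) // 16, blocks_x - 1)
        by = min(int(p[1]) // 16, blocks_y - 1)
        blind[by, bx] = True
print(int(blind.sum()), "blocks flagged for temporal-only prediction")
```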

Fast Intra Prediction Algorithm for H.264/AVC Based on Quadratic and Gradient Model

The H.264/AVC standard uses intra prediction with 9 directional modes for 4x4 and 8x8 luma blocks, and 4 directional modes for 16x16 macroblocks and 8x8 chroma blocks, respectively. This means that, for a macroblock, 736 different RDO calculations have to be performed before the best RDO mode is determined. With this multiple intra-mode prediction, intra coding in H.264/AVC offers a considerably higher improvement in coding efficiency compared to other compression standards, but the computational complexity is increased significantly. This paper presents a fast intra prediction algorithm for H.264/AVC based on a characteristic of homogeneity information. In this study, the gradient prediction method is used to predict homogeneous areas and the quadratic prediction function is used to predict non-homogeneous areas. Based on the correlation between homogeneity and block size, the smaller blocks are predicted by gradient and quadratic prediction, while the bigger blocks are predicted by gradient prediction. Experimental results show that the proposed method reduces the complexity by up to 76.07% while maintaining similar PSNR quality, with about a 1.94% bit-rate increase on average.
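
A hedged sketch of the homogeneity test implied above follows: a block whose gradient energy falls below a threshold is treated as homogeneous (gradient prediction), otherwise as non-homogeneous (quadratic prediction). The threshold and the example blocks are assumptions for illustration:

```python
import numpy as np

def is_homogeneous(block, threshold=100.0):
    """True if the block's mean gradient energy is below the (assumed) threshold."""
    gy, gx = np.gradient(block.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2)) < threshold

flat = np.full((16, 16), 128.0)                       # smooth macroblock
textured = np.indices((16, 16)).sum(axis=0) * 10.0    # strongly varying macroblock
print(is_homogeneous(flat), is_homogeneous(textured))  # True False
```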

Framework of Malaysian Knowledge Society: Results from Dual Data Approach

This paper outlines the research conducted to propose a framework for the 'Knowledge Society' (KS) in the Malaysian context. It is important to highlight that the emergence of the KS is a result of the rapid growth in knowledge and information. However, the discussion of the KS should not be limited to the importance of knowledge; a holistic KS is also determined by other imperative dimensions. This article discusses the results of a study conducted previously in Malaysia in order to identify the essential dimensions of a KS, and consequently proposes a KS framework in the Malaysian context. Two methods were employed, namely the Delphi technique and semi-structured interviews. The modified Delphi involved five rounds with ten experts, while the interviews were conducted with two prominent figures in Malaysia. The results support the proposed framework, which contains seven major dimensions required for Malaysia to become a KS in the future. The dimensions crucial for a holistic Malaysian KS are human capital, spirituality, economy, social, institutional, sustainability, and being driven by ICT.

Analysis of Temperature Change under Global Warming Impact using Empirical Mode Decomposition

The empirical mode decomposition (EMD) represents any time series as a finite set of basis functions. The bases are termed intrinsic mode functions (IMFs), which are mutually orthogonal and contain a minimum amount of cross-information. The EMD successively extracts the IMFs with the highest local frequencies in a recursive way, which effectively yields a set of low-pass filters based entirely on the properties exhibited by the data. In this paper, EMD is applied to explore the properties of multi-year air temperature and to observe its effects on climate change under global warming. This method decomposes the original time series into intrinsic time scales. It is capable of analyzing nonlinear, non-stationary climatic time series that cause problems for many linear statistical methods and their users. The analysis results show that the EMD modes present seasonal variability. Most of the IMFs have a normal distribution, and the energy density distribution of the IMFs satisfies a chi-square distribution. The IMFs are more effective in isolating physical processes of various time scales and are also statistically significant. The analysis results also show that the EMD method does a good job of revealing many characteristics of interannual climate. The results suggest that climate fluctuations of every single element, such as temperature, are the results of variations in the global atmospheric circulation.
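
The sketch below decomposes a synthetic "temperature-like" monthly series (seasonal cycle plus a slow trend plus noise) into IMFs, mirroring the kind of analysis described above. It assumes the third-party PyEMD ("EMD-signal") package; the series itself is a placeholder, not the paper's data:

```python
import numpy as np
from PyEMD import EMD   # assumes the third-party PyEMD ("EMD-signal") package

t = np.arange(30 * 12) / 12.0                                # 30 years, monthly samples
rng = np.random.default_rng(0)
series = 10 * np.sin(2 * np.pi * t) + 0.03 * t**2 + rng.normal(0, 1, t.size)

imfs = EMD()(series)      # rows ordered from highest to lowest local frequency
print(imfs.shape)         # (number of IMFs, series length)
```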