Energy Map Construction using Adaptive Alpha Grey Prediction Model in WSNs

Wireless Sensor Networks (WSNs) can be used to monitor physical phenomena in areas that are nearly inaccessible to humans. Because sensor nodes run on non-rechargeable batteries, limited power supply is the major constraint of WSNs, and much research is devoted to reducing the energy consumption of sensor nodes. An energy map can be combined with clustering, data dissemination, and routing techniques to reduce the power consumption of a WSN, and it can also indicate which part of the network is likely to fail in the near future. In this paper, the energy map is constructed using a prediction-based approach, with the adaptive alpha GM(1,1) model as the prediction model. GM(1,1) is widely used for predicting future values of a time series from a small number of past values, owing to its high computational efficiency and accuracy.
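
For illustration, a minimal Python sketch of the underlying GM(1,1) procedure is given below: the accumulated generating operation, a least-squares fit of the grey differential equation, and the inverse operation to recover forecasts. The alpha parameter weights the background value; the classic model fixes alpha = 0.5, while an adaptive-alpha variant tunes it (the tuning rule itself is not reproduced here), and the example energy readings are purely illustrative.

```python
import numpy as np

def gm11_predict(x0, n_ahead=1, alpha=0.5):
    """Forecast a positive time series with GM(1,1).

    x0      : recent observations (e.g. residual-energy readings)
    n_ahead : number of future steps to predict
    alpha   : background-value weight; 0.5 gives the classic model,
              while an adaptive-alpha variant tunes this value
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                               # accumulated generating operation
    z1 = alpha * x1[1:] + (1.0 - alpha) * x1[:-1]    # background values
    # Least-squares fit of the grey differential equation x0(k) + a*z1(k) = b
    B = np.column_stack((-z1, np.ones(len(z1))))
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    # Time-response function, then inverse accumulation to recover forecasts
    k = np.arange(len(x0) - 1, len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat)

# Predicted next two energy readings for one node (illustrative values):
# gm11_predict([100, 96, 93, 89, 86], n_ahead=2)
```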

Lowering Error Floors by Concatenation of Low-Density Parity-Check and Array Code

Low-density parity-check (LDPC) codes have been shown to deliver capacity-approaching performance; however, problematic graphical structures (e.g., trapping sets) in the Tanner graphs of some LDPC codes can cause high error floors in bit-error-ratio (BER) performance under the conventional sum-product algorithm (SPA). This paper presents a serial concatenation scheme that avoids the trapping sets and lowers the error floor of an LDPC code. The outer code in the proposed concatenation is the LDPC code, and the inner code is a high-rate array code. The approach applies an iterative hybrid process between BCJR decoding for the array code and the SPA for the LDPC code, together with bit-pinning and bit-flipping techniques. The (2640, 1320) Margulis code has been used for the simulation, and it is shown that the proposed concatenation and decoding scheme can considerably improve the error-floor performance with minimal rate loss.
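
As an illustration of one ingredient of such a scheme, the sketch below implements plain hard-decision bit-flipping decoding against a binary parity-check matrix; the paper's actual decoder combines this idea with bit-pinning, BCJR decoding of the inner array code, and the SPA, none of which are reproduced here.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=50):
    """Hard-decision bit-flipping decoding of a received word y
    against a binary parity-check matrix H (shape m x n)."""
    x = np.array(y, dtype=int)
    for _ in range(max_iters):
        syndrome = H.dot(x) % 2                  # which checks fail
        if not syndrome.any():
            return x, True                       # valid codeword reached
        # For each bit, count the unsatisfied checks it participates in
        fail_counts = syndrome.dot(H)
        # Flip the bits involved in the most unsatisfied checks
        x = np.where(fail_counts == fail_counts.max(), x ^ 1, x)
    return x, False                              # failed to converge
```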

Simulation of Non-Linear Behavior of Shear Wall under Seismic Loading

The seismic response of a steel shear wall system, including nonlinearity effects, is investigated in this paper using the finite element method. With the availability of modern computing, nonlinear finite element analysis has become a usable and reliable means of analyzing civil structures. In this research, a numerical model based on the finite element method is developed for the seismic analysis of shear walls, accounting for large displacements and materially nonlinear behavior. To develop the finite element code, the standard Galerkin weighted residual formulation is used. A two-dimensional plane stress model with a total Lagrangian formulation is employed to represent the shear wall response, and the Newton-Raphson method is applied to solve the nonlinear transient equations. The presented model can be extended to the analysis of civil engineering structures with different material behavior and complicated geometry.
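
The Newton-Raphson step referred to above can be sketched generically as follows; residual and tangent are assumed callables returning the out-of-balance force vector and tangent stiffness matrix assembled by a finite element code (placeholders here, illustrated with a one-DOF hardening spring rather than the paper's shear wall model).

```python
import numpy as np

def newton_raphson(residual, tangent, u0, tol=1e-8, max_iters=25):
    """Solve the nonlinear system R(u) = 0 by Newton-Raphson iteration."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iters):
        R = residual(u)
        if np.linalg.norm(R) < tol:
            return u
        K = tangent(u)                    # tangent stiffness at current state
        u = u - np.linalg.solve(K, R)     # Newton correction
    raise RuntimeError("Newton-Raphson did not converge")

# One-DOF hardening spring, R(u) = k*u + beta*u**3 - f (illustrative values)
k_lin, beta, f = 100.0, 5.0, 120.0
u = newton_raphson(lambda u: np.array([k_lin * u[0] + beta * u[0] ** 3 - f]),
                   lambda u: np.array([[k_lin + 3.0 * beta * u[0] ** 2]]),
                   np.array([0.0]))
```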

Information Fusion for Identity Verification

In this paper we propose a novel approach for ascertaining human identity based on the fusion of profile face and gait biometric cues. The identification approach, based on feature learning in a PCA-LDA subspace and classification using multivariate Bayesian classifiers, allows a significant improvement in recognition accuracy for low-resolution surveillance video scenarios. The experimental evaluation of the proposed identification scheme on a publicly available database [2] showed that the fusion of face and gait cues in a joint PCA-LDA space is a powerful method for capturing the inherent multimodality of walking gait patterns while simultaneously discriminating person identity.
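
A minimal sketch of the feature-learning stage is shown below with scikit-learn, assuming fused face and gait feature vectors; the data are random placeholders. Note that LinearDiscriminantAnalysis doubles as a Gaussian Bayesian classifier with shared covariance, which stands in for the paper's multivariate Bayesian classification under a Gaussian assumption.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 120))    # placeholder fused face+gait features
y_train = rng.integers(0, 10, size=200)  # placeholder identity labels

# PCA reduces dimensionality; LDA learns a discriminative subspace and
# classifies via Bayes' rule with a shared-covariance Gaussian model.
pca_lda = make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis())
pca_lda.fit(X_train, y_train)
predicted_ids = pca_lda.predict(X_train[:5])
```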

A Video-based Algorithm for Moving Objects Detection at Signalized Intersection

Mixed-traffic data (e.g., pedestrians, bicycles, and vehicles) at an intersection are among the essential inputs for intersection design and traffic control. However, some data, such as pedestrian volume, cannot be collected directly by common detectors (e.g., inductive loop, sonar, and microwave sensors). In this paper, a video-based detection algorithm is proposed for mixed-traffic data collection at intersections using surveillance cameras. The algorithm is derived from the Gaussian Mixture Model (GMM) and uses a mergence-time adjustment scheme to improve on the traditional algorithm. Real-world video data were selected to test the algorithm. The results show that the proposed algorithm has faster processing speed and higher accuracy than the traditional algorithm, indicating that the improved algorithm can be applied to detect mixed traffic at signalized intersections, even when conflicts occur.
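
The GMM foundation of such detection can be sketched with OpenCV's standard MOG2 background subtractor, as below; the paper's mergence-time adjustment is a modification on top of this kind of model and is not reproduced here. The video filename and thresholds are placeholders.

```python
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
cap = cv2.VideoCapture("intersection.avi")       # hypothetical input clip
kernel = np.ones((3, 3), np.uint8)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)            # per-pixel GMM classification
    # MOG2 marks shadows as 127; keep only confident foreground (255)
    fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)[1]
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    objects = [cv2.boundingRect(c) for c in contours
               if cv2.contourArea(c) > 100]      # moving-object candidates
cap.release()
```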

A Novel Framework for Abnormal Behaviour Identification and Detection for Wireless Sensor Networks

Despite extensive study of wireless sensor network security, defending against internal attacks and identifying abnormal sensor behaviour remain difficult and unsolved tasks. Conventional cryptographic techniques do not provide a robust security or detection process to protect the network from an internal attacker exhibiting abnormal behaviour. This paper presents a framework for identifying an insider attacker or abnormally behaving sensor and detecting its location, using false message detection and Time Difference of Arrival (TDoA). It is shown that the new framework can efficiently identify and locate the insider attacker, so that the attacker can be reprogrammed or removed from the network to protect it from internal attack.
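
The TDoA location step can be sketched as a generic least-squares problem, as below; the anchor layout, propagation speed, and measurements are placeholders, and the paper's false-message detection stage is not shown.

```python
import numpy as np
from scipy.optimize import least_squares

def tdoa_locate(anchors, tdoa, speed, guess):
    """Estimate a 2-D source position from time-difference-of-arrival
    measurements taken relative to anchors[0]."""
    anchors = np.asarray(anchors, dtype=float)

    def residual(p):
        d = np.linalg.norm(anchors - p, axis=1)   # anchor-to-source ranges
        return (d[1:] - d[0]) - speed * np.asarray(tdoa)

    return least_squares(residual, guess).x

# Illustrative call: four anchors on a 50 m square, RF propagation speed
anchors = [[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]]
position = tdoa_locate(anchors, tdoa=[1e-8, 2e-8, 3e-8],
                       speed=3e8, guess=np.array([25.0, 25.0]))
```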

e-Service Innovation within Open Innovation Networks

Service innovation is a central concern in a fast-changing environment. Owing to shifts in customer demands and advances in information technologies (IT) in service management, an expanded conceptualization of e-service innovation is required. In particular, innovation practices have become increasingly challenging, driving managers to employ a different, open innovation model to maintain competitive advantage. At the same time, firms need to interact with external and internal customers in innovative environments, such as open innovation networks, to co-create value. Based on these issues, a conceptual framework of e-service innovation is developed. This paper aims to examine the factors contributing to e-service innovation and firm performance, including financial and non-financial aspects. The study concludes by showing how e-service innovation can play a significant role in growing the overall value of the firm. The discussion and conclusion lead to a stronger understanding of e-service innovation and of co-creating value with customers within open innovation networks.

Context for Simplicity: A Basis for Context-aware Systems Based on the 3GPP Generic User Profile

The paper focuses on context modeling with respect to the specification of context-aware systems supporting ubiquitous applications. The proposed approach, followed within the SIMPLICITY IST project, uses a high-level system ontology to derive context models for system components, which are subsequently mapped to the system's physical entities. For the definition of user- and device-related context models in particular, the paper suggests a standards-based process consisting of an analysis phase using the Common Information Model (CIM) methodology, followed by an implementation phase that defines 3GPP-based components. The benefits of this approach are further illustrated by preliminary examples of XML grammars defining profiles, components, and component instances, coupled with descriptions of the respective ubiquitous applications.

The Views of Elementary Mathematics Education Preservice Teachers on Proving

This study was conducted to obtain the views of senior Elementary Mathematics Education preservice teachers on proving. Data were obtained via surveys and interviews carried out with 104 preservice teachers. According to the findings, although the preservice teachers hold positive views about the use of proving in mathematics teaching, their experience of proving is limited to their coursework, and they regard proving as something done only for exams. Furthermore, they expressed in the interviews that proving is difficult for them, and for this reason they prefer memorizing to learning.

Estimating Shortest Circuit Path Length Complexity

When binary decision diagrams are formed from uniformly distributed Monte Carlo data for a large number of variables, the complexity of the decision diagrams exhibits a predictable relationship to the number of variables and minterms. In the present work, a neural network model has been used to analyze the pattern of shortest path length for a larger number of Monte Carlo data points. The neural model shows strong descriptive power for the ISCAS benchmark data, with an RMS error of 0.102 for the shortest path length complexity. The model can therefore be considered a method of predicting path length complexities; this is expected to lead to minimum time complexity for very large-scale integrated circuits and the related computer-aided design tools that use binary decision diagrams.
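
A sketch of such a neural regression model is shown below using scikit-learn; the training data here are synthetic stand-ins (the real inputs would be variable and minterm counts paired with measured shortest path length complexities), so the numbers are illustrative only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Placeholder inputs: (number of variables, number of minterms) per sample
X = rng.integers(4, 40, size=(500, 2)).astype(float)
# Placeholder targets standing in for measured shortest path length
# complexities; real targets would come from the Monte Carlo experiments
y = X[:, 0] * np.log2(X[:, 1]) / 10.0 + rng.normal(scale=0.05, size=500)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X, y)
estimate = model.predict([[24.0, 30.0]])   # complexity estimate for a new case
```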

A Normalization-based Robust Image Watermarking Scheme Using SVD and DCT

Digital watermarking is one of the techniques for copyright protection. In this paper, a normalization-based robust image watermarking scheme that combines singular value decomposition (SVD) and discrete cosine transform (DCT) techniques is proposed. In the proposed scheme, the host image is first normalized to a standard form and divided into non-overlapping image blocks, and SVD is applied to each block. By concatenating the first singular values (SVs) of adjacent blocks of the normalized image, an SV block is obtained. DCT is then carried out on the SV blocks to produce SVD-DCT blocks. A watermark bit is embedded in the high-frequency band of an SVD-DCT block by imposing a particular relationship between two pseudo-randomly selected DCT coefficients. An adaptive frequency mask is used to adjust the local watermark embedding strength. Watermark extraction mainly involves the inverse process; the extraction method is blind and efficient. Experimental results show that the quality degradation of the watermarked image caused by the embedded watermark is visually transparent. The results also show that the proposed scheme is robust against various image processing operations and geometric attacks.
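
The coefficient-relationship embedding can be illustrated as follows; the indices i and j stand in for the pseudo-randomly selected high-frequency positions, and strength stands in for the adaptive mask value, so this is a simplified sketch rather than the full scheme. Extraction only compares the two coefficients, which is why the method is blind.

```python
import numpy as np
from scipy.fft import dct, idct

def embed_bit(sv_block, bit, i, j, strength=2.0):
    """Embed one bit in a 1-D block of first singular values by forcing
    an order relation between two DCT coefficients."""
    c = dct(np.asarray(sv_block, dtype=float), norm="ortho")
    hi, lo = max(c[i], c[j]), min(c[i], c[j])
    if hi - lo < strength:                   # enforce a robustness margin
        mid = (hi + lo) / 2.0
        hi, lo = mid + strength / 2.0, mid - strength / 2.0
    c[i], c[j] = (hi, lo) if bit == 1 else (lo, hi)
    return idct(c, norm="ortho")

def extract_bit(sv_block, i, j):
    c = dct(np.asarray(sv_block, dtype=float), norm="ortho")
    return 1 if c[i] >= c[j] else 0          # blind: no host image needed
```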

Simulating Pathogen Transport within a Naturally Ventilated Hospital Ward

Understanding how airborne pathogens are transported through hospital wards is essential for determining the infection risk to patients and healthcare workers. This study uses Computational Fluid Dynamics (CFD) simulations to explore possible pathogen transport within a six-bed partitioned Nightingale-style hospital ward. Grid independence of the ward model was addressed using the Grid Convergence Index (GCI) computed from solutions obtained on three fully structured grids. Pathogens were simulated using source terms in conjunction with a scalar transport equation and a RANS turbulence model. Errors were found to be less than 4% in the calculated air velocities, but averaged 13% in the scalar field. A parametric study of variations in the pathogen release point showed that the pathogen distribution is strongly influenced by the local velocity field and the degree of air mixing present.
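
For reference, the GCI quantities used in such a study can be computed with the standard three-grid formulas, sketched below; f1 denotes the finest-grid value of some monitored quantity, f3 the coarsest, and r the constant refinement ratio.

```python
import numpy as np

def observed_order(f1, f2, f3, r):
    """Observed order of accuracy p from three systematically refined
    grid solutions (f1 finest, f3 coarsest, refinement ratio r)."""
    return np.log(abs(f3 - f2) / abs(f2 - f1)) / np.log(r)

def gci(f_fine, f_coarse, r, p, fs=1.25):
    """Grid Convergence Index on the fine grid; fs = 1.25 is the
    customary safety factor for a three-grid study."""
    e = abs((f_fine - f_coarse) / f_fine)    # relative difference
    return fs * e / (r ** p - 1.0)
```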

Noise Depression in a Micro Stepping Motor

An investigation of noise in a micro stepping motor is presented in this article. With the trend towards higher precision and ever-smaller 3C products (Computer, Communication, and Consumer Electronics), the micro stepping motor is frequently used to drive micro systems and other 3C products. Unfortunately, the noise of a micro stepping motor is often too large for customers to accept. To depress the noise of a micro stepping motor, the dynamic characteristics of the system must be studied. In this article, a micro stepping motor in a digital camera, speed-controlled by a Visual Basic (VB) computer program, is investigated. A Kaman KD2300-2S non-contact eddy current displacement sensor, a probe microphone, and an HP 35670A analyzer are employed to analyze the dynamic characteristics of vibration and noise in the motor. Vibration and noise measurements for different types of bearings and different coil treatments are compared. The rotating components of the motor, such as the bearings and coil, play important roles in producing vibration and noise. It is found that the noise is depressed by about 3 to 4 dB when the copper bearing is replaced with a plastic one, and by about 6 to 7 dB when the motor coil is coated with paraffin wax.

Correction of Frequent English Writing Errors by Using Coded Indirect Corrective Feedback and Error Treatment

The purposes of this study are 1) to study the frequent English writing errors of students registered in the course Reading and Writing English for Academic Purposes II, and 2) to determine the results of writing error correction using coded indirect corrective feedback and writing error treatments. The sample comprises 28 second-year English major students of the Faculty of Education, Suan Sunandha Rajabhat University. The tool for the experimental study is the lesson plan of the course, and the tool for data collection is a set of 4 writing tests of short texts. The research findings disclose that the frequent English writing errors found in this course comprise 7 types of grammatical errors, namely fragment sentences, subject-verb agreement, wrong form of verb tense, singular or plural noun endings, run-on sentences, wrong form of verb pattern, and lack of parallel structure. Moreover, it is found that writing error correction using coded indirect corrective feedback and error treatment yields an overall reduction of the frequent English writing errors and an increase in students' achievement in the writing of short texts, significant at the .05 level.

Skew Detection Technique for Binary Document Images based on Hough Transform

Document image processing has become an increasingly important technology in the automation of office documentation tasks. During document scanning, skew is inevitably introduced into the incoming document image. Since algorithms for layout analysis and character recognition are generally very sensitive to page skew, skew detection and correction in document images are critical steps before layout analysis. In this paper, a novel skew detection method is presented for binary document images. The method applies thinning and the Hough transform to selected characters of the text to estimate the skew angle accurately. Several experiments have been conducted on various types of documents, such as English documents, journals, textbooks, documents in different languages, documents with different fonts, and documents with different resolutions, to demonstrate the robustness of the proposed method. The experimental results reveal that the proposed method is accurate compared with well-known existing methods.
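
A generic version of the Hough-based estimation step is sketched below with OpenCV; the paper additionally thins selected characters before voting, and the accumulator threshold here is a placeholder.

```python
import cv2
import numpy as np

def estimate_skew_degrees(binary_img):
    """Estimate document skew from a binary (uint8, 0/255) image by
    Hough voting; returns the median deviation from horizontal."""
    lines = cv2.HoughLines(binary_img, 1, np.pi / 1440, threshold=200)
    if lines is None:
        return 0.0
    # Convert line normals (rho, theta) into deviations from the text axis
    angles = [np.degrees(theta) - 90.0 for _, theta in lines[:, 0]]
    return float(np.median(angles))

# The page can then be deskewed by rotating through the negative angle,
# e.g. with cv2.getRotationMatrix2D and cv2.warpAffine.
```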

Effect of Moisture Content and Loading Rate on Mechanical Strength of Brown Rice Varieties

The effect of moisture content and loading rate on the mechanical strength of 12 brown rice grain varieties was determined. The results showed that the rupture force of brown rice grain decreased with increasing moisture content and loading rate. The highest rupture force was obtained at a moisture content of 8% (w.b.) and a loading rate of 10 mm/min, while the lowest rupture force corresponded to a moisture content of 14% (w.b.) and a loading rate of 15 mm/min. The 12 varieties were divided into three groups, namely local short grain varieties, local long grain varieties, and improved long grain varieties. It was observed that the rupture strengths of the three groups were statistically different from each other (P

Identified Factors Affecting the Citizen’s Intention to Adopt E-government in Saudi Arabia

This paper discusses E-government, in particular the challenges facing its adoption in Saudi Arabia. E-government can be defined based on an existing set of requirements. In this research we define E-government as a matrix of stakeholders (governments to governments, governments to business, and governments to citizens) using information and communications technology to deliver and consume services. E-government has been implemented for a considerable time in developed countries; however, E-government services still face many challenges in their implementation and general adoption in many countries, including Saudi Arabia. It has been noted that the introduction of E-government is a major challenge facing the government of Saudi Arabia, due to possible concerns raised by its citizens. The literature review and discussion identify the influential factors that affect citizens' intention to adopt E-government services in Saudi Arabia. These factors have been defined and categorized, followed by an exploratory study examining their importance. This research has thereby identified factors that determine whether citizens will adopt E-government services, aiding governments in assessing what is required to increase adoption.

Generating Qualitative Causal Graph using Modeling Constructs of Qualitative Process Theory for Explaining Organic Chemistry Reactions

This paper discusses the causal explanation capability of QRIOM, a tool aimed at supporting the learning of organic chemistry reactions. The development of the tool is based on the combined use of the Qualitative Reasoning (QR) technique and the Qualitative Process Theory (QPT) ontology. The simulation combines symbolic, qualitative descriptions of relations with quantity analysis to generate causal graphs. The pedagogy embedded in the simulator is to both simulate and explain organic reactions. Qualitative reasoning through a causal chain is presented to explain the overall changes made to the substrate, from the initial substrate to the production of the final outputs. Several uses of the QPT modeling constructs in supporting behavioral and causal explanation at run-time are also demonstrated. Explaining organic reactions through a causal graph trace can help improve learners' reasoning ability, in that their conceptual understanding of the subject is nurtured.

Face Texture Reconstruction for Illumination Variant Face Recognition

In illumination-variant face recognition, existing methods that extract the face albedo as a light-normalized image may lose extensive facial details, since the lighting template is discarded. To improve on this, a novel approach is proposed for realistic facial texture reconstruction that combines the original image and the albedo image. First, light subspaces for different identities are established from the given reference face images; then, by projecting the original and albedo images into each light subspace respectively, texture reference images with the corresponding lighting are reconstructed and two texture subspaces are formed. From the projections in the texture subspaces, a facial texture with normal lighting can be synthesized. Owing to the combination with the original image, facial details can be preserved along with the face albedo. In addition, image partitioning is applied to improve the synthesis performance. Experiments on the Yale B and CMU PIE databases demonstrate that this algorithm outperforms the others both in image representation and in face recognition.
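
The projection into a light (or texture) subspace can be sketched as an ordinary least-squares projection, as below, assuming the subspace basis is column-stacked from flattened reference images; the subspace construction and image partitioning of the full method are not reproduced here.

```python
import numpy as np

def project_to_subspace(image_vec, basis):
    """Least-squares projection of a flattened image onto the subspace
    spanned by the columns of `basis` (e.g. stacked reference images
    of one identity under varying lighting)."""
    coeffs, *_ = np.linalg.lstsq(basis, image_vec, rcond=None)
    return basis @ coeffs          # reconstruction inside the subspace
```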