Battery Operation Time Enhancement Based On Alternating Battery Cell Discharge

This paper proposes an alternating discharge method for multiple battery cells to extend battery operation time. In the proposed method, two battery cells are periodically connected in turn to a mobile device, so that one cell supplies power while the other rests. The operation time of the connected cell decreases due to the rate-capacity effect, while that of the resting cell increases due to the recovery effect. These two effects conflict with each other, but the recovery effect is generally larger than the rate-capacity effect, so battery lifetime is extended. Experimental results show that battery operation time increases by about 7% with alternating battery cell discharge.
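The trade-off between the two effects can be illustrated with a toy simulation. The sketch below is a minimal model under assumed constants (a linear rate-capacity loss factor `alpha` and a fixed per-step `recovery` amount); these are illustrative values, not parameters from the paper:

```python
def lifetime(alternate, capacity=100.0, load=2.0, period=5, alpha=0.1, recovery=0.3):
    """Steps until the load can no longer be supplied (illustrative units)."""
    cells = [capacity, capacity]   # remaining charge per cell
    lost = [0.0, 0.0]              # charge trapped by the rate-capacity effect
    t = 0
    while True:
        if alternate:
            drain = load * (1 + alpha * load)      # full load on one cell: extra loss
            active = (t // period) % 2
            if cells[active] < drain:              # fall back to the other cell
                active = 1 - active
            if cells[active] < drain:
                return t
            cells[active] -= drain
            lost[active] += drain - load           # recoverable portion of the loss
            idle = 1 - active
            rec = min(recovery, lost[idle])        # recovery effect while resting
            cells[idle] += rec
            lost[idle] -= rec
        else:                                      # baseline: both cells share the load
            drain = (load / 2) * (1 + alpha * load / 2)
            if min(cells) < drain:
                return t
            cells[0] -= drain
            cells[1] -= drain
        t += 1
```

With these constants the alternating scheme outlasts the always-on parallel baseline, mirroring the claim that the recovery effect outweighs the extra rate-capacity loss.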

Recommender Systems Using Ensemble Techniques

This study proposes a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by reflecting users' preferences more precisely. The proposed model consists of two steps. In the first step, this study uses logistic regression, decision trees, and artificial neural networks to predict customers who are highly likely to purchase products in each product group, and then combines the results of each predictor using multi-model ensemble techniques such as bagging and bumping. In the second step, this study uses market basket analysis to extract association rules for co-purchased products. Finally, through these two steps, the system selects customers who are highly likely to purchase products in each product group and recommends suitable products from the same or different product groups to them. We test the usability of the proposed system using a prototype and real-world transaction and profile data. In addition, we survey user satisfaction with the product list recommended by the proposed system versus randomly selected product lists. The results show that the proposed system can be useful in real-world online shopping stores.
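The combination steps can be sketched as follows: bagging-style averaging of predictor scores, bumping (keeping only the predictor that performs best on validation data), and the confidence of a co-purchase association rule. Function names and the toy data are illustrative, not the paper's implementation:

```python
def bagging(scores_by_model):
    """Average the purchase-likelihood scores produced by several predictors."""
    n = len(scores_by_model)
    return [sum(col) / n for col in zip(*scores_by_model)]

def bumping(scores_by_model, val_errors):
    """Keep only the predictor with the lowest validation error."""
    best = min(range(len(val_errors)), key=val_errors.__getitem__)
    return scores_by_model[best]

def rule_confidence(transactions, antecedent, consequent):
    """Confidence of the association rule antecedent -> consequent."""
    with_a = [t for t in transactions if antecedent in t]
    if not with_a:
        return 0.0
    return sum(1 for t in with_a if consequent in t) / len(with_a)
```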

Extension of the Client-Centric Approach under Small Buffer Space

Periodic broadcast is a cost-effective solution for large-scale distribution of popular videos because this approach guarantees a constant worst-case service latency, regardless of the number of video requests. An essential periodic broadcast method is the client-centric approach (CCA), which allows clients to use smaller receiving bandwidth to download broadcast data. An enhanced version, namely CCA++, was proposed to yield a shorter waiting time. This work further improves CCA++ by reducing client buffer requirements. The new scheme decreases the buffer requirements by as much as 52% when compared to CCA++. This study also provides an analytical evaluation to demonstrate the performance advantage compared with related schemes.

A General Mandatory Access Control Framework in Distributed Environments

In this paper, we propose a general mandatory access control framework for distributed systems. The framework can be applied to multiple operating systems and can handle multiple stakeholders. Despite considerable advancements in the area of mandatory access control, existing approaches to enforcing it can each be applied only in a specific operating system. Unlike the PC market, in which Windows captures the overwhelming share, there are a number of popular operating systems in the emerging smartphone environment, e.g., Android, Windows Mobile, Symbian, and RIM. It should also be noted that more and more stakeholders are involved in smartphone software, such as device owners, service providers, and application providers. Our framework includes three parts: the local decision layer, the middle layer, and the remote decision layer. The middle layer takes charge of managing security contexts, OS APIs, operations, and policy combination. Thanks to the middle layer, the design of the remote decision layer does not depend on a particular operating system. We implement the framework on Windows, Linux, and other popular embedded systems.
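The policy-combination duty of the middle layer can be sketched as a combinator over per-stakeholder decisions. The deny-overrides strategy and the example policies below are illustrative assumptions, since the abstract does not fix a specific combining algorithm:

```python
DENY, PERMIT, NOT_APPLICABLE = 'deny', 'permit', 'n/a'

def deny_overrides(decisions):
    """Combine stakeholder decisions: any deny wins, otherwise any permit wins."""
    if DENY in decisions:
        return DENY
    if PERMIT in decisions:
        return PERMIT
    return NOT_APPLICABLE

def check_access(subject, operation, obj, stakeholder_policies):
    """Ask each stakeholder's policy (local or remote) and combine the answers."""
    return deny_overrides([p(subject, operation, obj) for p in stakeholder_policies])
```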

Wideband Tunable RF Filters for Channel Selection in Crowded Spectral Bands

Utilizing a widely tunable filter is an effective approach in an environment where multiple wireless communication standards coexist. In particular, with the advent of the long term evolution (LTE) era, multi-band coverage is one of the important features required of RF components. In this paper, we present a frequency conversion technique and use it to build two types of RF filters specially designed for wide tunability to support multiple wireless communication standards. With the help of a complex mixing structure, the inherent image signal is suppressed. The RF band-pass filter (BPF) and notch filter achieve 1.8 dB and 1.6 dB insertion losses and 18 dB and 17 dB attenuations, respectively. The quality factor is greater than 30.

A New Floating Point Implementation of Base 2 Logarithm

Logarithms reduce products to sums and powers to products; they play an important role in signal processing, communication, and information theory. They are primarily used in hardware calculations, handling multiplications, divisions, powers, and roots effectively. Three bases are commonly used: the common logarithm (base 10), the natural logarithm (base e), and the binary logarithm (base 2). This paper demonstrates different methods of calculating log2, shows the complexity of each, and identifies the most accurate and efficient, besides giving insight into their hardware design. We present a new method called Floor Shift for fast calculation of log2, and then combine this algorithm with a Taylor series to improve the accuracy of the output, illustrating this with two examples. We finally compare the algorithms and conclude with our remarks.
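A minimal sketch of the two ideas: the integer part of log2 obtained by shifting (the floor-shift step), and refinement of the fractional part with the Taylor series of ln(1+f). The term count and the float normalization via `math.frexp` are implementation choices here, not the paper's hardware design:

```python
import math

def floor_shift_log2(x: int) -> int:
    """Integer part of log2 of a positive integer, via right shifts."""
    e = -1
    while x:
        x >>= 1
        e += 1
    return e

def log2_taylor(x: float, terms: int = 30) -> float:
    """log2(x) = e + log2(m) with m in [1, 2); log2(m) from the series of ln(1+f)."""
    m, e = math.frexp(x)          # x = m * 2**e with m in [0.5, 1)
    m, e = m * 2, e - 1           # normalize so that m is in [1, 2)
    f = m - 1.0
    s, term = 0.0, f
    for n in range(1, terms + 1):
        s += term / n if n % 2 == 1 else -term / n   # f - f^2/2 + f^3/3 - ...
        term *= f
    return e + s / math.log(2)
```

The alternating series converges slowly as the mantissa approaches 2, so more terms (or a further range reduction) are needed near that end; this is the accuracy/complexity trade-off the paper examines.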

Parallel Priority Region Approach to Detect Background

Background detection is essential in video analysis; optimization is often needed in order to achieve real-time calculation. Information gathered by dual cameras placed at the front and rear of an Autonomous Vehicle (AV) is integrated for background detection. In this paper, real-time calculation is achieved in the proposed technique by using Priority Regions (PR) and parallel processing together, where each frame is divided into regions and each region is processed in parallel. The PR division depends upon driver view limitations. A background detection system is built on Temporal Difference (TD) and Gaussian Filtering (GF). The multiple threshold and sigma (weight) values of the temporal difference and Gaussian filtering are based on PR characteristics. Experimental results are obtained on real scenes. The speed and accuracy are compared with traditional background detection techniques, and the effectiveness of PR and parallel processing is also discussed in this paper.
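Per-region temporal differencing with region-specific thresholds, run in parallel, can be sketched as below. The row-slice regions and the thread pool are illustrative stand-ins for the paper's priority-region division:

```python
from concurrent.futures import ThreadPoolExecutor

def temporal_diff(prev, curr, threshold):
    """Mark a pixel as foreground when the inter-frame change exceeds the threshold."""
    return [[1 if abs(c - p) > threshold else 0 for c, p in zip(cr, pr)]
            for cr, pr in zip(curr, prev)]

def detect(prev, curr, regions):
    """regions: list of (row_slice, threshold) priority regions, processed in parallel."""
    with ThreadPoolExecutor() as pool:
        jobs = [pool.submit(temporal_diff, prev[sl], curr[sl], th) for sl, th in regions]
        mask = []
        for j in jobs:
            mask.extend(j.result())   # reassemble regions in order
        return mask
```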

Multimodal Biometric Authentication Using Choquet Integral and Genetic Algorithm

The Choquet integral is a tool for information fusion that is very effective when the fuzzy measures associated with it are well chosen. In this paper, we propose a new approach for calculating the fuzzy measures associated with the Choquet integral in a context of data fusion in multimodal biometrics. The proposed approach is based on genetic algorithms. It has been validated on two databases: the first contains synthetic scores, and the second contains biometric scores relating to the face, fingerprint, and palmprint. The results achieved attest to the robustness of the proposed approach.
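For reference, the discrete Choquet integral that the tuned fuzzy measure feeds into can be computed as below; the two-source example measure is illustrative. In the paper's approach, a genetic algorithm searches for the measure `mu` that gives the best fusion performance:

```python
def choquet(scores, mu):
    """Discrete Choquet integral of scores (dict: source -> score) w.r.t. the
    fuzzy measure mu (dict: frozenset of sources -> value, mu[empty] = 0,
    mu[all sources] = 1)."""
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending scores
    total, prev = 0.0, 0.0
    remaining = set(scores)   # sources whose score is >= the current level
    for src, val in items:
        total += (val - prev) * mu[frozenset(remaining)]
        prev = val
        remaining.discard(src)
    return total
```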

Adaptive Score Normalization: A Novel Approach for Multimodal Biometric Systems

Multimodal biometric systems integrate the data presented by multiple biometric sources, hence offering better performance than systems based on a single biometric modality. Although the coupling of biometric systems can be done at different levels, fusion at the score level is the most common, since it has been proven more effective than fusion at the other levels. However, the scores from different modalities are generally heterogeneous, so a normalization step is needed to transform them into a common domain before combining them. In this paper, we study the performance of several normalization techniques with various fusion methods in a context relating to the fusion of three unimodal systems based on the face, the palmprint, and the fingerprint. We also propose a new adaptive normalization method that takes into account the distributions of client scores and impostor scores. Experiments conducted on a database of 100 people show that the performance of a multimodal system depends on the choice of the normalization method and the fusion technique. The proposed normalization method gave the best results.
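Two standard normalization techniques commonly compared in such studies (min-max and z-score) followed by weighted-sum fusion can be sketched as follows. The weights and scores are illustrative; the paper's adaptive method additionally shapes the mapping using the client and impostor score distributions:

```python
def min_max(scores):
    """Map scores linearly onto [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def z_score(scores):
    """Center on the mean and scale by the standard deviation."""
    n = len(scores)
    mean = sum(scores) / n
    std = (sum((s - mean) ** 2 for s in scores) / n) ** 0.5
    return [(s - mean) / std for s in scores]

def fuse(normalized_by_modality, weights):
    """Weighted-sum fusion of already-normalized per-modality scores."""
    return [sum(w * m[i] for w, m in zip(weights, normalized_by_modality))
            for i in range(len(normalized_by_modality[0]))]
```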

Fung’s Model Constants for Intracranial Blood Vessel of Human Using Biaxial Tensile Test Results

Mechanical properties of cerebral arteries are, owing to their relationship with cerebrovascular diseases, of clinical worth. To acquire these properties, eight samples were obtained from middle cerebral arteries of human cadavers whose deaths were not due to injuries or diseases of the cerebral vessels, and tested within twelve hours after resection using a precise biaxial tensile test device specially developed for the present study, considering the dimensions, sensitivity, and anisotropic nature of the samples. The resulting stress-stretch curves were plotted and subsequently fitted to a hyperelastic three-parameter Fung model. It was found that the arteries were noticeably stiffer in the circumferential than in the axial direction. It was also demonstrated that multi-parameter hyperelastic constitutive models are useful for the mathematical description of the behavior of cerebral vessel tissue. The reported material properties are a proper reference for numerical modeling of cerebral arteries and computational analysis of healthy or diseased intracranial arteries.
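For reference, one commonly used two-dimensional Fung-type strain-energy form for arterial tissue (an assumption here; the paper's exact three-parameter variant may differ) is

```latex
W = \frac{c}{2}\left(e^{Q} - 1\right), \qquad
Q = a_{1} E_{\theta\theta}^{2} + a_{2} E_{zz}^{2}
  + 2\, a_{4}\, E_{\theta\theta} E_{zz},
```

where $E_{\theta\theta}$ and $E_{zz}$ are the Green-Lagrange strains in the circumferential and axial directions, and $c$, $a_1$, $a_2$, $a_4$ are the material constants fitted to the biaxial test data.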

A Deterministic Dynamic Programming Approach for Optimization Problem with Quadratic Objective Function and Linear Constraints

This paper presents a novel deterministic dynamic programming approach for solving optimization problems with a quadratic objective function and linear equality and inequality constraints. The proposed method employs backward recursion, in which computation proceeds from the last stage to the first stage of a multi-stage decision problem. A generalized recursive equation that gives the exact solution of an optimization problem is derived in this paper. The method is purely analytical and avoids the need for an initial solution. The feasibility of the proposed method is demonstrated with a practical example. The numerical results show that the proposed method provides the global optimum solution with negligible computation time.
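For the equality-constrained special case (minimize the sum of c_i x_i^2 subject to the x_i summing to a budget B, with c_i > 0), the backward recursion admits a closed form: each cost-to-go is f_k(b) = alpha_k b^2, and minimizing c x^2 + alpha (b - x)^2 over x gives x* = alpha b / (c + alpha) and the new coefficient c alpha / (c + alpha). The sketch below covers only this toy case, not the paper's general recursion with inequality constraints:

```python
def quadratic_dp(costs, budget):
    """Minimize sum(c_i * x_i**2) s.t. sum(x_i) = budget, by backward recursion."""
    alphas = [costs[-1]]                    # last stage: f_N(b) = c_N * b^2
    for c in reversed(costs[:-1]):
        a = alphas[-1]
        alphas.append(c * a / (c + a))      # analytic stage combination
    alphas.reverse()                        # alphas[k] is the stage-k coefficient
    x, b = [], budget
    for c, a_next in zip(costs[:-1], alphas[1:]):
        xi = a_next / (c + a_next) * b      # minimizer of c*x^2 + a_next*(b-x)^2
        x.append(xi)
        b -= xi
    x.append(b)                             # final stage takes the remainder
    return x, alphas[0] * budget ** 2
```

With positive costs the recovered allocation is automatically non-negative, matching the intuition that cheaper stages receive larger shares of the budget.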

Probabilistic Bhattacharya Based Active Contour Model in Structure Tensor Space

Object identification and segmentation applications require extraction of the foreground object from the background. In this paper, a Bhattacharya distance based probabilistic approach is utilized with an active contour model (ACM) to segment an object from the background. In the proposed approach, the Bhattacharya histogram is calculated in a non-linear structure tensor space. Based on the histogram, a new formulation of the active contour model is proposed to segment images. The results are tested on both color and gray images from the Berkeley image database. The experimental results show that the proposed model is applicable to color and gray images as well as texture images and natural images. Moreover, compared to the Bhattacharya based ACM in ICA space, the proposed model is able to segment multiple objects as well.
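The Bhattacharya distance between foreground and background histograms, the quantity that drives the contour evolution, can be computed as below; the structure-tensor feature extraction is omitted, and the histograms are assumed already normalized:

```python
import math

def bhattacharya_distance(p, q):
    """Bhattacharya distance between two normalized histograms (each sums to 1)."""
    bc = sum(math.sqrt(a * b) for a, b in zip(p, q))   # Bhattacharya coefficient
    return -math.log(bc) if bc > 0 else float('inf')   # 0 for identical histograms
```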

Beneficiation of Pyrolytic Carbon Black

This research investigated treatment of crude carbon black produced from pyrolysis of waste tyres in order to evaluate its quality and possible industrial applications. A representative sample of crude carbon black was dry screened to determine the initial particle size distribution. This was followed by pulverizing the crude carbon black and leaching in hot concentrated sulphuric acid for the removal of heavy metals and other contaminants. Analysis of the refined carbon black showed a significant improvement of the product quality compared to crude carbon black. It was discovered that refined carbon black can be further classified into multiple high value products for various industrial applications such as filler, paint pigment, activated carbon and fuel briquettes.

A Statistical Prediction of Likely Distress in the Nigerian Banking Sector Using a Neural Network Approach

One of the most significant threats to the economy of a nation is the bankruptcy of its banks. This study evaluates the susceptibility of Nigerian banks to failure with a view to identifying ratios and financial data that are sensitive to the solvency of a bank. Further, a predictive model is generated to guide all stakeholders in the industry. Thirty quoted banks that had published annual reports for the year preceding the consolidation, i.e., 2004, were selected. They were examined for distress using multilayer perceptron neural network analysis. The model was then used to analyze further reforms by the Central Bank of Nigeria using published annual reports of twenty quoted banks for the years 2008 and 2011. The model can thus be used for future prediction of failure in the Nigerian banking system.
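The forward pass of a multilayer perceptron of the kind used for such classification can be sketched as follows. The weights, layer sizes, and ratio inputs are placeholders, not the study's fitted model:

```python
import math

def mlp_predict(ratios, hidden_weights, out_weights):
    """One-hidden-layer perceptron. Each row of hidden_weights is (w..., bias);
    out_weights is (w..., bias). Returns a distress probability in (0, 1)."""
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sig(sum(w * x for w, x in zip(row[:-1], ratios)) + row[-1])
              for row in hidden_weights]
    return sig(sum(w * h for w, h in zip(out_weights[:-1], hidden)) + out_weights[-1])
```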

A Study of Priority Evaluation and Resource Allocation for Revitalization of Cultural Heritages in the Urban Development

Proper maintenance and preservation of significant cultural heritages or historic buildings is necessary. It can not only enhance environmental benefits and a sense of community, but also preserve a city's history and people's memory. It allows the next generation to get a glimpse of our past and achieves the goal of sustainably preserved cultural assets. However, maintenance work has so far not been managed appropriately for many designated heritages or historic buildings, and the planning and implementation of reuse has yet to achieve a breakthrough. As a result, heritages are reduced to the mere formality of being "reserved" rather than genuinely "conserved". The restoration and preservation of cultural heritages is an important study issue owing to considerations of historical significance, symbolism, and economic benefits. However, decision makers, such as public-sector officials, often face the question of which heritage should be prioritized for restoration under limited budgets. Only very few techniques are available today to determine appropriate restoration priorities for diverse historical heritages, perhaps because few systematized decision-making aids have been proposed. In the past, discussions of the management and maintenance of cultural assets were limited to the selection of reuse alternatives rather than the allocation of resources. In view of this, this research adopts integrated research methods to solve the problems that decision makers may encounter when allocating resources for the management and maintenance of heritages and historic buildings. The purpose of this study is to develop a sustainable decision-making model for local governments to resolve these problems. We propose an alternative decision support model to prioritize restoration needs within limited budgets.
The model is constructed based on the fuzzy Delphi, fuzzy analytic network process (FANP), and goal programming (GP) methods. To avoid misallocating resources, this research proposes a precise procedure that takes the views of multiple stakeholders, limited costs, and resources into consideration. The combination of many factors and goals is also taken into account to find the highest-priority and feasible solution. To illustrate the proposed approach, seven cultural heritages in Taipei City are used as an empirical study, and the results are analyzed in depth to explain the application of our approach.

On the Computation of a Common n-finger Robotic Grasp for a Set of Objects

Industrial robotic arms utilize multiple end-effectors, each for a specific part and a specific task. We propose a novel algorithm that defines a single end-effector configuration able to grasp a given set of objects with different geometries. The algorithm will have great benefit in production lines, allowing a single robot to grasp various parts and hence reducing the number of end-effectors needed. Moreover, the algorithm will reduce end-effector design and manufacturing time and final product cost. The algorithm searches for a common grasp over the set of objects. It maps all possible grasps for each object that satisfy a quality criterion, taking into account possible external wrenches (forces and torques) applied to the object. The mapped grasps are represented by high-dimensional feature vectors that describe the shape of the gripper. We generate a database of all possible grasps for each object in the feature space, and then use a search and classification algorithm to intersect all possible grasps over all parts and find a single common grasp suitable for all objects. We present simulations of planar and spatial objects to validate the feasibility of the approach.
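The intersection step, keeping grasp feature vectors that have a close counterpart in every object's grasp database, can be sketched as below. The tolerance-based Euclidean match is an illustrative stand-in for the paper's search and classification algorithm:

```python
def common_grasps(grasp_sets, tol=0.1):
    """Return grasps from the first object's database that have a match
    (Euclidean distance <= tol) in every other object's database."""
    def close(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5 <= tol
    return [g for g in grasp_sets[0]
            if all(any(close(g, h) for h in other) for other in grasp_sets[1:])]
```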

The Implementation of the Multi-Agent Classification System (MACS) in Compliance with FIPA Specifications

The paper discusses the implementation of the Multi-Agent Classification System (MACS) and its use to provide automated and accurate classification of end users developing applications in the spreadsheet domain. Different technologies have been brought together to build MACS. The strength of the system is the integration of agent technology and the FIPA specifications with other technologies, namely .NET Windows-service-based agents, Windows Communication Foundation (WCF) services, the Service-Oriented Architecture (SOA), and Oracle Data Mining (ODM). Microsoft's .NET Windows-service-based agents were utilized to develop the monitoring agents of MACS, while the .NET WCF services together with the SOA approach allowed distribution of, and communication between, agents over the WWW. The Monitoring Agents (MAs) were configured to execute automatically to monitor Excel spreadsheet development activities by content. Data gathered by the Monitoring Agents from various resources over a period of time is collected and filtered by a Database Updater Agent (DUA) residing in the .NET client application of the system. This agent then transfers and stores the data in an Oracle server database via Oracle stored procedures for further processing that leads to the classification of the end-user developers.

Using Multi-Linguistic Techniques for Thailand Herb and Traditional Medicine Registration Systems

Thailand has evolved a unique culture and body of knowledge, foremost of which is Thai traditional medicine (TTM). Recently, a number of researchers have tried to preserve this indigenous knowledge; however, systems for doing so remain scant. To preserve this ancient knowledge, we therefore invented and integrated multi-linguistic techniques to create a system that collects all of the recipes. This application extracts medical recipes from antique scriptures and then normalizes their antiquated words, primitive grammar, and antiquated measurements to modern ones. We then apply ingredient-duplication calculation, proportion-similarity calculation, and score ranking to examine duplicate recipes. We collected questionnaires from registrants and the public to investigate user satisfaction, and the results were satisfactory. This application not only assists registrants in validating copyright violations during the TTM registration process, but also helps people treat their illnesses, aiding both Thai people and all mankind in the fight against intractable diseases.
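The duplicate-checking calculations can be sketched as set overlap plus proportion comparison. The exact formulas here (Jaccard overlap and mean absolute proportion difference) are illustrative assumptions, not the system's actual scoring functions:

```python
def ingredient_overlap(a, b):
    """Jaccard overlap of two ingredient lists (ingredient-duplication calculation)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def proportion_similarity(a, b):
    """Compare proportions of shared ingredients (dicts: ingredient -> fraction).
    Returns 1.0 for identical proportions, lower for larger differences."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return 1 - sum(abs(a[i] - b[i]) for i in shared) / len(shared)
```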

A Car Parking Monitoring System Using Wireless Sensor Networks

This paper presents a car parking monitoring system using wireless sensor networks. Multiple sensor nodes, a sink node, a gateway, and a server constitute a wireless network for monitoring a parking lot. Each of the sensor nodes is equipped with a 3-axis AMR sensor and deployed in the center of a parking space. Each sensor node reads its sensor values periodically and transmits the data to the sink node if the current and immediately previous sensor values differ by more than a threshold value. The sensor nodes and sink node use the 448 MHz band for wireless communication. Since RF transmission only occurs when sensor values show abrupt changes, the number of RF transmission operations is reduced and battery power is conserved. The data from the sensor nodes reach the server via the sink node and gateway. The server determines which parking spaces are occupied by cars based upon the received sensor data and reference values. The reference values are the average sensor values measured by each sensor node when the corresponding parking spot is not occupied by a vehicle. Because the decision making is done by the server, the computational burden of the sensor nodes is relieved, which helps reduce their duty cycle.
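The node-side transmit decision and the server-side occupancy decision described above can be sketched as simple threshold tests; the numeric thresholds are placeholders, not calibrated values from the paper:

```python
def should_transmit(prev, curr, threshold=50):
    """Node side: transmit only when some axis of the 3-axis AMR reading
    changes by more than the threshold since the previous reading."""
    return any(abs(c - p) > threshold for c, p in zip(curr, prev))

def occupied(reading, reference, margin=100):
    """Server side: a spot is taken when the reading deviates from the
    empty-spot reference by more than the margin on any axis."""
    return any(abs(r - ref) > margin for r, ref in zip(reading, reference))
```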

Technique for Voltage Control in Distribution System

This paper presents techniques for voltage control in distribution systems, integrated into the distribution management system. Voltage is an important parameter in the control of electrical power systems, and distribution network operators have the responsibility to regulate the voltage supplied to consumers within statutory limits. Traditionally, the On-Load Tap Changer (OLTC) transformer equipped with automatic voltage control (AVC) relays has been the most popular and effective voltage control device. A static synchronous compensator (STATCOM) may be equipped with several controllers to perform multiple control functions, and a static var compensator (SVC) provides regulation slopes and available margins for var dispatch. In this paper, voltage control in distribution networks is established as a centralized analytical function.