Topology Optimization of Aircraft Fuselage Structure

Topology optimization is defined as the method of determining the optimal distribution of material within an assumed design space for given functionality, loads and boundary conditions [1]. It can be used to optimize shape for the purposes of weight reduction, minimizing material requirements or selecting cost-effective materials [2]. Topology optimization has been implemented using finite element methods for the analysis, together with optimization techniques based on the method of moving asymptotes, genetic algorithms, the optimality criteria method, level sets and topological derivatives. A case study of a typical fuselage design is considered in this paper to explain the benefits of topology optimization in the design cycle. A cylindrical shell is assumed as the design space, standard aerospace payloads are applied to the fuselage, and the wing attachments act as constraints. Topology optimization is then carried out using Finite Element (FE) based software. The optimization yields a structural concept design that satisfies all the design constraints using minimum material.
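
As a hedged illustration of the kind of problem such FE-based tools solve (not taken from the paper), the standard density-based minimum-compliance formulation reads:

```latex
\begin{aligned}
\min_{\boldsymbol{\rho}} \quad & c(\boldsymbol{\rho}) = \mathbf{U}^{\mathsf T}\mathbf{K}(\boldsymbol{\rho})\,\mathbf{U}
  && \text{(compliance)}\\
\text{s.t.} \quad & \mathbf{K}(\boldsymbol{\rho})\,\mathbf{U} = \mathbf{F}
  && \text{(FE equilibrium)}\\
 & \textstyle\sum_{e} \rho_e v_e \le V^{*}, \qquad 0 < \rho_{\min} \le \rho_e \le 1
  && \text{(material budget)}
\end{aligned}
```

Here the element stiffness is typically interpolated as $E_e(\rho_e)=\rho_e^{p}E_0$ (SIMP), and an update scheme such as the optimality criteria method or the method of moving asymptotes mentioned above adjusts the densities $\rho_e$ iteration by iteration.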

Reliability Optimization for 3G Cellular Access Networks

This paper addresses the network reliability optimization problem in the design of optical access networks for 3G cellular systems. We present a novel 0-1 integer programming model for designing optical access network topologies composed of multiple rings with common edges, in order to guarantee always-on services. The results show that the proposed model yields access network topologies with optimal reliability while satisfying both network cost limitations and traffic demand requirements.
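
A minimal sketch of the general shape of such a 0-1 design model, with illustrative (assumed) constraint families rather than the authors' exact formulation:

```latex
\begin{aligned}
\min \quad & \sum_{(i,j)\in E} c_{ij}\, x_{ij} \qquad \text{(cost of the selected fiber edges)}\\
\text{s.t.} \quad & \text{every access node lies on at least one ring, and rings share only designated common edges,}\\
 & \text{traffic routed over the selected edges respects capacity and demand requirements,}\\
 & x_{ij} \in \{0,1\} \quad \forall (i,j)\in E .
\end{aligned}
```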

EEG-Based Fractal Analysis of Different Motor Imagery Tasks using Critical Exponent Method

The objective of this paper is to characterize the spontaneous Electroencephalogram (EEG) signals of four different motor imagery tasks and thereby to show a possible way beyond the present binary communication between the brain and a machine, or Brain-Computer Interface (BCI). The processing technique used in this paper is fractal analysis evaluated by the Critical Exponent Method (CEM). The EEG signal was recorded in 5 healthy subjects, sampling 15 measuring channels at 1024 Hz. Each channel was preprocessed with Laplacian spatial filtering so as to reduce spatial blur and thereby increase spatial resolution. The EEG of each channel was segmented and its fractal dimension (FD) calculated. The FD was evaluated in the time interval corresponding to the motor imagery and averaged over all subjects for each channel. In order to characterize the FD distribution, linear regression curves of FD over the electrode positions were applied. The FD differences between the proposed mental tasks are quantified and evaluated for each experimental subject. The results show that the EEG signals of motor imagery tasks exhibit a substantial fractal dimension, which can be exploited in multiple-state BCI applications.
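
As a rough, hedged sketch of how a fractal dimension can be extracted from an EEG segment, the snippet below estimates the spectral power-law exponent and converts it to an FD via the fractional-Brownian-motion relation FD = (5 − β)/2. The actual Critical Exponent Method differs in detail; the fitting band and function name here are illustrative assumptions.

```python
import numpy as np

def spectral_fractal_dimension(x, fs=1024.0):
    """Estimate an EEG fractal dimension from the spectral power-law exponent."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    psd = np.abs(np.fft.rfft(x)) ** 2            # periodogram (up to a constant)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    band = (freqs > 1.0) & (freqs < 40.0)        # assumed EEG fitting band
    slope, _ = np.polyfit(np.log(freqs[band]), np.log(psd[band] + 1e-12), 1)
    beta = -slope                                # PSD ~ 1 / f**beta
    return (5.0 - beta) / 2.0                    # FD of an fBm-like signal
```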

Morphology and Risk Factors for Blunt Aortic Trauma in Car Accidents - An Autopsy Study

Background: Blunt aortic trauma (BAT) includes various morphological changes that occur during deceleration, acceleration and/or body compression in traffic accidents. The various forms of BAT, from limited laceration of the intima to complete transection of the aorta, depend on the force acting on the vessel wall and the tolerance of the aorta to injury. The force depends on the change in velocity, the dynamics of the accident and the seating position in the car. Tolerance to aortic injury depends on the anatomy, histological structure and pathomorphological alterations due to aging or disease of the aortic wall. An overview of the literature and medical documentation reveals that different terms are used to describe certain forms of BAT, which can lead to misinterpretation of findings or diagnoses. We therefore propose a classification that would enable uniform systematic screening of all forms of BAT. We have classified BAT into three morphological types: TYPE I (intramural), TYPE II (transmural) and TYPE III (multiple) aortic ruptures, with appropriate subtypes. Methods: All car accident casualties examined at the Institute of Forensic Medicine from 2001 to 2009 were included in this retrospective study. Autopsy reports were used to determine the occurrence of each morphological type of BAT in deceased drivers, front seat passengers and other passengers, and to define the morphology of BAT in relation to the accident dynamics and the age of the fatalities. Results: A total of 391 fatalities in car accidents were included in the study. TYPE I, TYPE II and TYPE III BAT were observed in 10.9%, 55.6% and 33.5% of cases, respectively. The incidence of BAT in drivers, front seat passengers and other passengers was 36.7%, 43.1% and 28.6%, respectively. In frontal collisions the incidence of BAT was 32.7%, in lateral collisions 54.2%, and in other traffic accidents 29.3%. The average age of fatalities with BAT was 42.8 years and of those without BAT 39.1 years. Conclusion: Identification and early recognition of the risk factors for BAT following a traffic accident is crucial for successful treatment of patients with BAT. Front seat passengers over 50 years of age who have been injured in a lateral collision are at the greatest risk of BAT.

An Efficient Data Collection Approach for Wireless Sensor Networks

One of the most important applications of wireless sensor networks is data collection. This paper proposes an efficient approach for data collection in wireless sensor networks by introducing a Member Forward List. This list contains the nodes with the highest priority for forwarding data. When a node fails or dies, the list is used to select the node with the next-highest priority. The benefit of this list is that it prevents the algorithm from being repeated when a node fails or dies. The results show that the Member Forward List decreases power consumption and latency in wireless sensor networks.
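
A minimal sketch, under assumed data structures, of how a member forward list could be consulted when the current forwarder fails; the node identifiers, priority values and the is_alive() check are illustrative, not the paper's exact design.

```python
def next_forwarder(member_forward_list, is_alive):
    """Return the highest-priority node that is still alive.

    member_forward_list: list of (node_id, priority) pairs.
    is_alive: callable node_id -> bool.
    """
    for node_id, _priority in sorted(member_forward_list,
                                     key=lambda pair: pair[1], reverse=True):
        if is_alive(node_id):
            return node_id
    return None  # no candidate left; fall back to route re-discovery

# Example: node 'B' has failed, so 'C' (next-highest priority) is chosen.
mfl = [('A', 0.4), ('B', 0.9), ('C', 0.7)]
print(next_forwarder(mfl, is_alive=lambda n: n != 'B'))  # -> 'C'
```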

The Effects of Peristalsis on Dispersion of a Micropolar Fluid in the Presence of Magnetic Field

The paper presents an analytical solution for the dispersion of a solute in the peristaltic motion of a micropolar fluid in the presence of a magnetic field and both homogeneous and heterogeneous chemical reactions. The average effective dispersion coefficient has been found using Taylor's limiting condition under the long wavelength approximation. The effects of various relevant parameters on the average coefficient of dispersion have been studied. The average effective dispersion coefficient increases with the amplitude ratio, the cross viscosity coefficient and the heterogeneous chemical reaction rate parameter, but decreases with the magnetic field parameter and the homogeneous chemical reaction rate parameter. It can be noted that the presence of peristalsis enhances the dispersion of a solute.

Exergy Analysis of Combined Cycle of Air Separation and Natural Gas Liquefaction

This paper presents a novel combined cycle of air separation and natural gas liquefaction. The idea is that natural gas can be liquefied while gaseous or liquid nitrogen and oxygen are produced in one combined cryogenic system. Cycle simulation and exergy analysis were performed to evaluate the process and thereby reveal the influence of the crucial parameter, the flow-rate ratio β between the two expander stages, on the heat transfer temperature difference, its distribution and the consequent exergy loss. Composite curves for the combined hot streams (feed natural gas and recycled nitrogen) and the cold stream show the degree of optimization available in this process if an appropriate β is chosen. The results indicate that increasing β reduces the temperature difference and the exergy loss in the heat exchange process. However, the maximum value of β is confined by the minimum temperature difference prescribed in heat exchanger design standards and by the heat exchanger size. The optimal value βopt under different operating conditions, corresponding to the required minimum temperature differences, was investigated.
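
The link between temperature difference and exergy loss invoked here follows the standard relation for heat $\dot{Q}$ crossing from a hot stream at $T_h$ to a cold stream at $T_c$ with ambient temperature $T_0$:

```latex
\dot{E}_{x,\mathrm{loss}} \;=\; T_0\,\dot{Q}\left(\frac{1}{T_c}-\frac{1}{T_h}\right)
```

so shrinking the stream-to-stream temperature gap (here by tuning β) directly reduces the exergy destroyed in the heat exchange process.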

A Robust Method for Hand Tracking Using Mean-shift Algorithm and Kalman Filter in Stereo Color Image Sequences

Real-time hand tracking is a challenging task in many computer vision applications such as gesture recognition. This paper proposes a robust method for hand tracking in a complex environment using mean-shift analysis and a Kalman filter in conjunction with a 3D depth map. The depth information, obtained by passive stereo measurement based on cross-correlation and the known calibration data of the cameras, resolves the overlap between the hands and the face. Mean-shift analysis uses the gradient of the Bhattacharyya coefficient as a similarity function to derive the candidate region of the hand that is most similar to a given hand target model. A Kalman filter is then used to estimate the position of the hand target. The results of hand tracking, tested on various video sequences, are robust to changes in shape as well as to partial occlusion.
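
A minimal sketch of the two ingredients named above: the Bhattacharyya similarity used by mean-shift and a constant-velocity Kalman prediction of the hand position. The state layout, histogram form and noise level are assumptions for illustration, not the paper's exact settings.

```python
import numpy as np

def bhattacharyya(p, q):
    """Similarity between target and candidate colour histograms (both sum to 1)."""
    return np.sum(np.sqrt(p * q))

def kalman_predict(x, P, dt=1.0, q=1e-2):
    """One predict step for the state [px, py, vx, vy] with process noise q."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    x = F @ x                      # propagate position with constant velocity
    P = F @ P @ F.T + q * np.eye(4)  # grow the uncertainty
    return x, P
```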

Revised PLWAP Tree with Non-frequent Items for Mining Sequential Pattern

Sequential pattern mining is a challenging task in the data mining area with broad applications. One of those applications is mining patterns from weblogs. Weblogs are now highly dynamic, and some of their entries may become obsolete over time. In addition, users may frequently change the threshold value during the data mining process until the required output or interesting rules are obtained. Some recently proposed algorithms for mining weblogs build the tree with two scans and consume considerable time and space. In this paper, we build a Revised PLWAP tree with Non-frequent Items (RePLNI-tree) with a single scan over all items. While mining sequential patterns, the links related to the non-frequent items are not considered; hence, it is not necessary to delete or maintain the information of those nodes while revising the tree for mining updated transactions. The algorithm supports both incremental and interactive mining. Patterns do not need to be recomputed each time the weblog is updated or the minimum support is changed. The performance of the proposed tree remains good even when the size of the incremental database exceeds 50% of the existing one. For evaluation purposes, we have used benchmark weblog datasets and found that the performance of the proposed tree is encouraging compared to some recently proposed approaches.

Proposal of Additional Fuzzy Membership Functions in Smoothing Transition Autoregressive Models

In this paper we propose and examine additional membership functions for Smoothing Transition Autoregressive (STAR) models. More specifically, we present the hyperbolic tangent, Gaussian and generalized bell functions. Because STAR models follow a fuzzy-logic approach, more fuzzy membership functions should be tested. Furthermore, fuzzy rules can be incorporated, or other training or computational methods, such as error backpropagation or genetic algorithms, can be applied instead of nonlinear least squares. We examine two macroeconomic variables of the US economy: the inflation rate and the six-month treasury bill interest rate.
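
A hedged sketch of the three transition functions named above, written against a transition variable s with location c and slope/width parameters; the exact parameterization used in the paper may differ.

```python
import numpy as np

def tanh_transition(s, gamma, c):
    """Hyperbolic-tangent transition, rescaled to lie in (0, 1)."""
    return 0.5 * (1.0 + np.tanh(gamma * (s - c)))

def gaussian_transition(s, gamma, c):
    """Gaussian (bell-shaped) transition, peaking at 1 for s = c."""
    return np.exp(-gamma * (s - c) ** 2)

def generalized_bell(s, a, b, c):
    """Standard generalized-bell membership with width a and shape b."""
    return 1.0 / (1.0 + np.abs((s - c) / a) ** (2 * b))
```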

Neuro-fuzzy Classification System for Wireless-Capsule Endoscopic Images

In this research study, an intelligent detection system to support medical diagnosis and the detection of abnormal lesions by processing endoscopic images is presented. The images used in this study were obtained using the M2A Swallowable Imaging Capsule, a patented, video color-imaging disposable capsule. Schemes have been developed to extract texture features from the fuzzy texture spectra in the chromatic and achromatic domains for a selected region of interest from each color-component histogram of the endoscopic images. An advanced fuzzy inference neural network, which combines fuzzy systems and artificial neural networks, and the concept of fusion of multiple classifiers dedicated to specific feature parameters have also been adopted in this paper. The high detection accuracy achieved by the proposed system thus provides an indication that such intelligent schemes could be used as a supplementary diagnostic tool in endoscopy.

Visualization of Searching and Sorting Algorithms

This paper employs interactive, multimedia-based visualization of the execution sequences of algorithms. It helps learners grasp the fundamentals of algorithms, such as searching and sorting methods, in a simple manner. Visualization attracts more attention than theoretical study and offers an easy way of learning. We propose methods for presenting the runtime sequence of each algorithm in an interactive way and aim to overcome the drawbacks of existing character-based systems. The system illustrates each and every step clearly using text and animation. Comparisons of time complexity have been carried out, and the results show that our approach provides a better understanding of the algorithms.

Optimal Embedded Generation Allocation in Distribution System Employing Real Coded Genetic Algorithm Method

This paper proposes a new methodology for the optimal allocation and sizing of Embedded Generation (EG) employing a Real Coded Genetic Algorithm (RCGA) to minimize the total power losses and to improve voltage profiles in radial distribution networks. The RCGA uses continuous floating-point numbers as the representation, unlike conventional binary encoding. It is used as the solution tool and can determine the optimal location and size of EG in a radial system simultaneously. The method is developed in MATLAB. The effect of EG unit installation and sizing on the distribution network is demonstrated using a 24-bus system.
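
A minimal sketch of the real-coded operators such a representation implies (arithmetic crossover and uniform mutation over bounded floating-point genes, e.g. a candidate bus location and an EG size); the operator choices and bounds are assumptions for illustration, not necessarily those used in the paper.

```python
import random

def arithmetic_crossover(p1, p2):
    """Blend two real-coded parents into two children."""
    a = random.random()
    c1 = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
    c2 = [(1 - a) * x + a * y for x, y in zip(p1, p2)]
    return c1, c2

def uniform_mutation(chrom, bounds, rate=0.1):
    """Resample each gene inside its (lo, hi) bounds with probability `rate`."""
    return [random.uniform(lo, hi) if random.random() < rate else g
            for g, (lo, hi) in zip(chrom, bounds)]
```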

Using a Trust-Based Environment Key for Mobile Agent Code Protection

Human activities are increasingly based on the use of remote resources and services, and on interaction between remotely located parties that may know little about each other. Mobile agents must be prepared to execute on different hosts with various environmental security conditions. The aim of this paper is to propose a trust-based mechanism to improve the security of mobile agents and allow their execution in various environments. Thus, an adaptive trust mechanism is proposed, based on the dynamic interaction between the agent and the environment. Information collected during the interaction enables the generation of an environment key. This key indicates the host's trust degree and permits the mobile agent to adapt its execution. Trust estimation is based on concrete parameter values; thus, in case of distrust, the source of the problem can be located and an appropriate mobile agent behavior can be selected.

Cryptanalysis of Two-Factor Authenticated Key Exchange Protocol in Public Wireless LANs

In Public Wireless LANs (PWLANs), user anonymity is an essential issue. Recently, Juang et al. proposed an anonymous authentication and key exchange protocol using smart cards in PWLANs. They claimed that their scheme provides identity privacy, mutual authentication, and half-forward secrecy. In this paper, we point out that Juang et al.'s protocol is vulnerable to the stolen-verifier attack and does not satisfy user anonymity.

Climate Change and Environmental Education: The Application of Concept Map for Representing the Knowledge Complexity of Climate Change

Climate Change, a subject of high knowledge complexity, has become an essential issue because of its significant impact on human existence. Specific national policies, some of which address educational aspects, have therefore been published to tackle this pressing problem. Accordingly, this study aims to analyze and integrate the relationship between Climate Change and environmental education, and to apply the concept-map perspective to represent the knowledge contents and structures of Climate Change. In this way, the knowledge contents of Climate Change can be represented more comprehensively and used as a tool for environmental education. The method adopted for this study is a knowledge conversion model built on a platform on which the participating experts and teachers cooperated to combine each participant's standpoint into a complete knowledge framework, which forms the foundation for structuring the concept map. The result of this research contains the important concepts, the precise propositions and the complete concept map for representing the robust concepts of Climate Change.

Pipelined Control-Path Effects on Area and Performance of a Wormhole-Switched Network-on-Chip

This paper presents the design trade-offs and performance impacts of the number of pipeline phases of the control path signals in a wormhole-switched network-on-chip (NoC). The number of pipeline phases of the control path varies between one and two cycles. The control paths consist of the routing request paths for output selection and the arbitration paths for input selection. Data communications between on-chip routers are implemented synchronously and, for quality of service, the inter-router data transports are controlled using link-level congestion control to avoid loss of data due to overflow. The trade-off between the area (logic cell area) and the performance (bandwidth gain) of the two proposed NoC router microarchitectures is presented in this paper. The performance evaluation is made using a traffic scenario with different numbers of workloads under a 2D mesh NoC topology with a static routing algorithm. Using a 130-nm CMOS standard-cell technology, our NoC routers can be clocked at 1 GHz, resulting in a high-speed network link and a high router bandwidth capacity of about 320 Gbit/s. Based on our experiments, the number of control path pipeline stages has a more significant impact on the NoC performance than on the logic area of the NoC router.
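
One plausible, purely illustrative decomposition of the quoted aggregate bandwidth, assuming a five-port 2D mesh router with 64-bit links (the port count and flit width are not stated in the abstract):

```latex
5\ \text{ports}\times 64\ \tfrac{\text{bit}}{\text{cycle}}\times 1\ \text{GHz} \;=\; 320\ \text{Gbit/s}
```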

Improved Text-Independent Speaker Identification using Fused MFCC and IMFCC Feature Sets based on Gaussian Filter

A state-of-the-art Speaker Identification (SI) system requires a robust feature extraction unit followed by a speaker modeling scheme for a generalized representation of these features. Over the years, Mel-Frequency Cepstral Coefficients (MFCC), modeled on the human auditory system, have been used as a standard acoustic feature set for speech-related applications. In a recent contribution by the authors, it was shown that Inverted Mel-Frequency Cepstral Coefficients (IMFCC) form a useful feature set for SI, containing complementary information present in the high-frequency region. This paper introduces a Gaussian-shaped filter (GF) in place of the typical triangular-shaped bins when calculating MFCC and IMFCC. The objective is to introduce a higher amount of correlation between subband outputs. The performances of both MFCC and IMFCC improve with the GF over the conventional triangular filter (TF) based implementation, individually as well as in combination. With GMM as the speaker modeling paradigm, the performances of the proposed GF-based MFCC and IMFCC, in individual and fused mode, have been verified on two standard databases: YOHO (microphone speech) and POLYCOST (telephone speech), each of which has more than 130 speakers.
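
A hedged sketch of replacing triangular mel filters with Gaussian-shaped filters, as proposed above. The mel formula is the usual HTK-style one; the number of filters, FFT size, sampling rate and the rule tying each Gaussian width to the spacing of neighbouring centres are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def gaussian_filterbank(n_filters=20, n_fft=512, fs=8000):
    """Gaussian-shaped filters centred on mel-spaced frequencies."""
    freqs = np.linspace(0, fs / 2, n_fft // 2 + 1)
    edges_hz = mel_to_hz(np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_filters + 2))
    centres_hz = edges_hz[1:-1]
    spacing = np.diff(edges_hz)                  # distance between neighbouring centres
    fbank = np.zeros((n_filters, freqs.size))
    for i, (c, s) in enumerate(zip(centres_hz, spacing[:-1])):
        sigma = s / 2.0                          # broader skirts than a triangle,
        fbank[i] = np.exp(-0.5 * ((freqs - c) / sigma) ** 2)  # hence more subband correlation
    return fbank
```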

Increasing the Efficiency of Rake Receivers for Ultra-Wideband Applications

In diversity-rich environments, such as Ultra-Wideband (UWB) applications, the a priori determination of the number of strong diversity branches is difficult because of the considerably large number of diversity paths, which are characterized by a variety of power delay profiles (PDPs). Several Rake implementations have been proposed in the past in order to reduce the number of estimated and combined paths. To this end, we introduce two adaptive Rake receivers, which combine a subset of the resolvable paths by simultaneously considering both the total combined output signal-to-noise ratio (SNR) and the individual SNR of each path. These schemes achieve better adaptation to channel conditions than other known receivers, without further increasing the complexity. Their performance is evaluated in different practical UWB channels whose models are based on extensive propagation measurements. The proposed receivers compromise between power consumption, complexity and the performance gain from the additional paths, resulting in important savings in power and computational resources.
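
A minimal sketch of SNR-driven finger selection in this spirit: fingers are added only while each new path's individual SNR stays above a fraction of the strongest one and while the combined (MRC) SNR still grows noticeably. The thresholds and function name are illustrative assumptions, not the paper's criteria.

```python
import numpy as np

def select_paths(path_snrs, rel_threshold=0.1, min_gain=0.01):
    """Pick a subset of resolvable paths for combining."""
    snrs = np.sort(np.asarray(path_snrs, dtype=float))[::-1]  # strongest first
    combined, chosen = 0.0, []
    for snr in snrs:
        if snr < rel_threshold * snrs[0]:                # path too weak on its own
            break
        if combined > 0 and snr / combined < min_gain:   # negligible extra gain
            break
        chosen.append(snr)
        combined += snr                                  # MRC: output SNR is the sum
    return chosen, combined
```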

Dynamic Traffic Simulation for Traffic Congestion Problem Using an Enhanced Algorithm

Traffic congestion has become a major problem in many countries, and one of its main causes is road merges: vehicles tend to move more slowly when they reach a merging point. In this paper, an enhanced algorithm for traffic simulation based on the fluid-dynamic algorithm and kinematic wave theory is proposed and used to study traffic congestion at a road merge. This paper also describes the development of a dynamic traffic simulation tool which is used for scenario planning and to forecast the traffic congestion level at a given time based on defined parameter values. The tool incorporates the enhanced algorithm as well as the two original algorithms. Outputs from the three above-mentioned algorithms are measured in terms of traffic queue length, travel time and the total number of vehicles passing through the merging point. This paper also suggests an efficient way of reducing traffic congestion at a road merge by analyzing the traffic queue length and travel time.
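
The kinematic wave theory referred to here rests on the Lighthill-Whitham-Richards conservation law for traffic density ρ; the Greenshields speed-density closure shown alongside is one common choice and is an assumption, since the abstract does not state which relation the tool uses:

```latex
\frac{\partial \rho}{\partial t} + \frac{\partial \bigl(\rho\, v(\rho)\bigr)}{\partial x} = 0,
\qquad v(\rho) = v_f\left(1 - \frac{\rho}{\rho_{\max}}\right)
```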