Abstract: This paper presents a survey of network bandwidth management based on papers published in the IEEE Xplore database over the three years from 2009 to 2011. Network bandwidth management is a current issue for computer engineering applications and systems. A detailed comparison of the published papers is presented in order to examine this critical research area for IP-based networks. Important information such as the network focus area, modeling approaches for IP-based networks, and the filtering or scheduling used at the network application layer is presented. Much research on bandwidth management has been done in the broad networking area, but less has addressed IP-based networks specifically at the application layer. A few studies have contributed new schemes or enhanced models, but bandwidth management issues still arise at the application layer. This survey serves as basic research toward the implementation of a network bandwidth management technique, a new framework model, and a scheduling scheme or algorithm for IP-based networks, focusing on a bandwidth control mechanism that prioritizes network traffic at the application layer.
Abstract: In this paper, a method for deriving a group priority vector in the Fuzzy Analytic Network Process (FANP) is proposed. By introducing importance weights for multiple decision makers (DMs) based on their experience, the Fuzzy Preference Programming (FPP) method is extended to a fuzzy group prioritization problem in the FANP. Additionally, fuzzy pairwise comparison judgments are used rather than exact numerical assessments in order to model the uncertainty and imprecision in the DMs' judgments; the fuzzy group prioritization problem is then transformed into a fuzzy non-linear programming optimization problem that maximizes group satisfaction. Unlike known fuzzy prioritization techniques, the new method proposed in this paper can easily derive crisp weights from an incomplete and inconsistent fuzzy set of comparison judgments and does not require additional aggregation procedures. Detailed numerical examples are used to illustrate the implementation of our approach and to compare it with the latest fuzzy prioritization methods.
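The prioritization idea described above can be sketched in code. The following is a minimal illustration of a Mikhailov-style Fuzzy Preference Programming step: crisp weights are derived from triangular fuzzy pairwise judgments by maximizing the minimum membership degree (a satisfaction level λ). The judgment values, the solver choice (SciPy's SLSQP), and the three-criterion setup are illustrative assumptions, not the paper's method or data.

```python
# Sketch of FPP-style prioritization: maximize lambda such that every
# ratio w_i/w_j attains membership >= lambda in its triangular fuzzy
# judgment (l, m, u).  Judgments below are illustrative, roughly
# consistent with weights (0.5, 0.3, 0.2).
import numpy as np
from scipy.optimize import minimize

judgments = {
    (0, 1): (1.4, 5 / 3, 2.0),   # w0/w1 around 1.67
    (0, 2): (2.0, 2.5, 3.0),     # w0/w2 around 2.5
    (1, 2): (1.2, 1.5, 1.8),     # w1/w2 around 1.5
}

def solve_fpp(judgments, n):
    # Decision vector x = [w_0 .. w_{n-1}, lambda]; maximize lambda.
    cons = [{"type": "eq", "fun": lambda x: x[:n].sum() - 1.0}]
    for (i, j), (l, m, u) in judgments.items():
        # membership(w_i/w_j) >= lambda, multiplied through by w_j > 0:
        #   w_i - l*w_j - lambda*(m - l)*w_j >= 0   (rising side)
        #   u*w_j - w_i - lambda*(u - m)*w_j >= 0   (falling side)
        cons.append({"type": "ineq",
                     "fun": lambda x, i=i, j=j, l=l, m=m:
                         x[i] - l * x[j] - x[n] * (m - l) * x[j]})
        cons.append({"type": "ineq",
                     "fun": lambda x, i=i, j=j, m=m, u=u:
                         u * x[j] - x[i] - x[n] * (u - m) * x[j]})
    x0 = np.append(np.full(n, 1.0 / n), 0.0)
    bounds = [(1e-3, 1.0)] * n + [(-1.0, 1.0)]
    res = minimize(lambda x: -x[n], x0, bounds=bounds,
                   constraints=cons, method="SLSQP")
    return res.x[:n], res.x[n]

weights, lam = solve_fpp(judgments, 3)
```

A positive optimal λ indicates that the fuzzy judgments are mutually consistent; with the centers chosen consistently here, the weights come out near (0.5, 0.3, 0.2).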
Abstract: Segmentation techniques based on Active Contour Models have benefited greatly from the use of prior information during their evolution. Shape prior information is captured from a training set and is introduced into the optimization procedure to restrict the evolution to allowable shapes. In this way, the evolution converges onto regions even with weak boundaries. Although significant effort has been devoted to different ways of capturing and analyzing prior information, very little thought has been given to the way image information is combined with prior information. This paper focuses on a more natural way of incorporating the prior information in the level set framework. As a proof of concept, the method is applied to hippocampus segmentation in T1-MR images. Hippocampus segmentation is a very challenging task, due to the multivariate surrounding region and the missing boundary with the neighboring amygdala, whose intensities are identical. The proposed method mimics the human approach to segmentation and thus shows improvements in segmentation accuracy.
Abstract: Speedups from mapping four real-life DSP applications onto an embedded system-on-chip that couples coarse-grained reconfigurable logic with an instruction-set processor are presented. The reconfigurable logic is realized by a 2-dimensional array of Processing Elements. A design flow for improving application performance is proposed. Critical software parts, called kernels, are accelerated on the Coarse-Grained Reconfigurable Array. The kernels are detected by profiling the source code. For mapping the detected kernels onto the reconfigurable logic, a priority-based mapping algorithm has been developed. Two 4x4 array architectures, which differ in their interconnection structure among the Processing Elements, are considered. The experiments for eight different instances of a generic system show that significant overall application speedups are achieved for the four applications. The performance improvements range from 1.86 to 3.67, with an average value of 2.53, compared with an all-software execution. These speedups are quite close to the maximum theoretical speedups imposed by Amdahl's law.
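The Amdahl's-law bound mentioned above is easy to check numerically: if a fraction f of execution time is spent in accelerated kernels and those kernels run s times faster, the overall speedup is 1 / ((1 - f) + f / s). The numbers below are illustrative, not the paper's profiling data.

```python
# Back-of-the-envelope Amdahl's-law check with illustrative numbers.

def amdahl_speedup(f, s):
    """Overall speedup when fraction f of the runtime is sped up by s."""
    return 1.0 / ((1.0 - f) + f / s)

overall = amdahl_speedup(0.75, 10.0)   # 75% of time in kernels, 10x faster
ceiling = 1.0 / (1.0 - 0.75)           # limit as kernel speedup -> infinity
```

With these assumed figures the overall speedup is about 3.08, below the theoretical ceiling of 4, which mirrors how the reported 1.86-3.67 speedups sit just under their Amdahl bounds.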
Abstract: A Self-Excited Induction Generator (SEIG) builds up voltage as it enters its magnetic saturation region. Due to non-linear magnetic characteristics, the performance analysis of a SEIG involves cumbersome mathematical computations. The dependence of air-gap voltage on saturated magnetizing reactance can only be established at rated frequency by conducting a laboratory test commonly known as the synchronous run test. However, there is no laboratory method to determine the saturated magnetizing reactance and air-gap voltage of a SEIG at varying speed, terminal capacitance and other loading conditions. For the overall analysis of a SEIG, prior knowledge of the magnetizing reactance, generated frequency and air-gap voltage is essential. Thus, analytical methods are the only alternative for determining these variables. The non-existence of a direct mathematical relationship between these variables for different terminal conditions has forced researchers to evolve new computational techniques. Artificial Neural Networks (ANNs) are very useful for the solution of such complex problems, as they do not require any a priori information about the system. In this paper, an attempt is made to use cascaded neural networks to first determine the generated frequency and magnetizing reactance under varying terminal conditions, and then the air-gap voltage of the SEIG. The results obtained from the ANN model are used to evaluate the overall performance of the SEIG and are found to be in good agreement with experimental results. Hence, it is concluded that the analysis of a SEIG can be carried out effectively using ANNs.
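The cascade structure described above (stage 1 maps terminal conditions to frequency and magnetizing reactance; stage 2 takes those outputs plus the original inputs to predict air-gap voltage) can be sketched as follows. Plain least-squares regressors stand in for the trained ANNs, and the synthetic linear data is illustrative only, not SEIG physics.

```python
# Sketch of a two-stage cascaded model: inputs -> (frequency, Xm),
# then (inputs, frequency, Xm) -> air-gap voltage.  Least-squares
# models stand in for ANNs; the data below is synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.8, 1.2, size=(200, 3))       # speed, C, load (p.u.)

# Synthetic targets with a known linear dependence (illustrative only).
freq_xm = X @ np.array([[1.0, 0.3], [0.2, 1.1], [-0.4, -0.2]])
v_gap = 0.9 * freq_xm[:, 0] + 0.5 * freq_xm[:, 1] + 0.1 * X[:, 2]

def fit(A, y):
    """Least-squares 'network': returns a predictor for y given A."""
    W, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda a: a @ W

stage1 = fit(X, freq_xm)                        # conditions -> (f, Xm)
stage2 = fit(np.hstack([X, freq_xm]), v_gap)    # cascade -> air-gap voltage

x_new = np.array([[1.0, 1.0, 1.0]])
fx = stage1(x_new)                              # predicted f and Xm
v_pred = stage2(np.hstack([x_new, fx]))         # predicted air-gap voltage
```

Because the toy data is exactly linear, the cascade recovers the target exactly; with real SEIG data, non-linear regressors (the paper's ANNs) would take the place of `fit`.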
Abstract: A feed-forward, back-propagation Artificial Neural Network (ANN) model has been used to forecast the occurrence of wastewater overflows in a combined sewerage reticulation system. This approach was tested to evaluate its applicability as an alternative to the common practice of developing a complete conceptual, mathematical hydrological-hydraulic model of the sewerage system to enable such forecasts. The ANN approach obviates the need for an a priori understanding and representation of the underlying hydrological-hydraulic phenomena in mathematical terms, yet enables learning the characteristics of a sewer overflow from the historical data.
The performance of the standard feed-forward, back-propagation of error algorithm was enhanced by a modified data-normalizing technique that enabled the ANN model to extrapolate into territory unseen in the training data. The algorithm and the data-normalizing method are presented along with the ANN model output results, which indicate good accuracy in the forecasted sewer overflow rates. However, it was revealed that accurate forecasting of the overflow rates is heavily dependent on the availability of real-time flow monitoring at the overflow structure to provide antecedent flow rate data. The ability of the ANN to forecast the overflow rates without the antecedent flow rates (as is the case with traditional conceptual reticulation models) was found to be quite poor.
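One common way to let a sigmoid-output network extrapolate beyond its training range, in the spirit of the modified normalization described above, is to map the training data onto an inner sub-range of [0, 1] rather than the full interval. The sketch below uses an assumed [0.1, 0.8] target range and illustrative flow values; it is not the paper's exact technique.

```python
# Sketch of extrapolation-friendly normalization: map the training range
# onto an inner sub-range (here [0.1, 0.8], an assumed choice) so the
# network output can still represent values above the training maximum.
import numpy as np

LO, HI = 0.1, 0.8                      # inner target range (assumption)

def make_scaler(train):
    tmin, tmax = train.min(), train.max()
    span = tmax - tmin
    norm = lambda x: LO + (HI - LO) * (x - tmin) / span
    denorm = lambda y: tmin + (y - LO) * span / (HI - LO)
    return norm, denorm

flows = np.array([2.0, 5.0, 9.0, 12.0])   # illustrative training flows
norm, denorm = make_scaler(flows)

scaled_peak = norm(14.0)               # above the training maximum,
                                       # yet still below the sigmoid's 1.0
```

A flow of 14.0 scales to 0.94, still inside the representable output range, whereas a full [0, 1] normalization would saturate at the training maximum of 12.0.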
Abstract: We study a long-range percolation model on the hierarchical lattice ΩN of order N, where the probability of connection between two nodes separated by distance k is of the form min{αβ^(-k), 1}, with α ≥ 0 and β > 0. The parameter α is the percolation parameter, while β describes the long-range nature of the model. ΩN is an example of a so-called ultrametric space, which differs qualitatively from Euclidean-type lattices in remarkable ways. In this paper, we characterize the sizes of large clusters for this model along the lines of some prior work. The proof involves a stationary embedding of ΩN into Z. The phase diagram of this long-range percolation is well understood.
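The connection rule min{αβ^(-k), 1} is straightforward to simulate. The sketch below samples independent edges under this rule; the parameter values are illustrative.

```python
# Sketch of the long-range connection rule: nodes at hierarchical
# distance k are linked independently with probability
# min(alpha * beta**(-k), 1).  Parameter values are illustrative.
import random

def connect_prob(k, alpha, beta):
    return min(alpha * beta ** (-k), 1.0)

def is_connected(k, alpha, beta, rng):
    """Sample one edge at distance k."""
    return rng.random() < connect_prob(k, alpha, beta)

rng = random.Random(42)
alpha, beta = 2.0, 3.0
probs = [connect_prob(k, alpha, beta) for k in range(1, 5)]
edge = is_connected(2, alpha, beta, rng)   # sample one edge at distance 2
```

As k grows the connection probability decays geometrically in β, which is what makes β the long-range parameter of the model.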
Abstract: A shipping comb is mounted on the Head Stack Assembly (HSA) to prevent collision of the heads, maintain the gap between suspensions and protect the HSA tips from unintentional contact damage in the manufacturing process. A failure analysis of the shipping comb in hard disk drive production processes is proposed. Field observations were performed to determine the fatal areas on the shipping comb and their failure fractions. Root cause failure analysis (RCFA) is applied to specify the failure causes under various loading conditions. For reliability improvement, a failure mode and effects analysis (FMEA) procedure is performed to evaluate the risk priority. Consequently, more suitable design criteria were obtained.
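The FMEA risk-priority step mentioned above conventionally scores each failure mode with a Risk Priority Number, RPN = Severity x Occurrence x Detection (each rated 1-10), and ranks modes by descending RPN. The failure modes and ratings below are illustrative, not the paper's data.

```python
# Sketch of FMEA risk ranking: RPN = S * O * D, highest RPN first.
# Failure modes and ratings are illustrative placeholders.

failure_modes = {
    # name: (severity, occurrence, detection)
    "comb tip fracture":      (8, 4, 3),
    "suspension gap loss":    (7, 6, 5),
    "unintended HSA contact": (9, 3, 4),
}

def rpn(s, o, d):
    return s * o * d

ranking = sorted(failure_modes.items(),
                 key=lambda kv: rpn(*kv[1]), reverse=True)
```

With these assumed ratings, "suspension gap loss" (RPN 210) would be prioritized ahead of the higher-severity but better-detected modes.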
Abstract: Identifying and classifying intersections according to severity is very important for the implementation of safety-related countermeasures, and effective models are needed to compare and assess severity. Highway safety organizations have considered intersection safety among their priorities. In spite of significant advances in highway safety, large numbers of crashes with high severities still occur on highways. Investigation of the factors that influence crashes enables engineers to carry out calculations in order to reduce crash severity. Previous studies lacked a model capable of simultaneously illustrating the influence of human factors, road, vehicle, weather conditions and traffic features, including traffic volume and flow speed, on crash severity. Thus, this paper aims at developing models to illustrate the simultaneous influence of these variables on crash severity in urban highways. The models represented in this study have been developed using binary logit models, calibrated with SPSS software. The backward regression method in SPSS was used to identify the significant variables in the model. From the results obtained, it can be concluded that the main factors increasing crash severity in urban highways are driver age, movement in reverse gear, technical defects of the vehicle, vehicle collisions with motorcycles and bicycles, collisions with bridges, frontal impact collisions, frontal-lateral collisions and multi-vehicle crashes.
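A fitted binary logit model of the kind described above turns factor values into a severe-crash probability via P(severe) = 1 / (1 + exp(-(b0 + b·x))). The coefficients and factor names below are hypothetical placeholders, not the calibrated values from the study.

```python
# Sketch of applying a binary logit model: logistic link over a linear
# combination of crash factors.  Coefficients are hypothetical.
import math

coefs = {
    "intercept": -2.0,
    "driver_age_over_65": 0.8,
    "reverse_gear": 1.1,
    "frontal_impact": 0.9,
}

def p_severe(x):
    """Probability of a severe crash given binary factor indicators."""
    z = coefs["intercept"] + sum(coefs[k] * v for k, v in x.items())
    return 1.0 / (1.0 + math.exp(-z))

base = p_severe({"driver_age_over_65": 0, "reverse_gear": 0, "frontal_impact": 0})
risky = p_severe({"driver_age_over_65": 1, "reverse_gear": 1, "frontal_impact": 1})
```

Positive coefficients raise the severity odds multiplicatively (each coefficient b corresponds to an odds ratio e^b), which is how the influential factors listed in the abstract would be read off the calibrated model.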
Abstract: This paper describes a text mining technique for automatically extracting association rules from collections of textual documents. The technique, called Extracting Association Rules from Text (EART), depends on keyword features to discover association rules amongst the keywords labeling the documents. In this work, the EART system ignores the order in which the words occur, focusing instead on the words and their statistical distributions in documents. The main contributions of the technique are that it integrates XML technology with an Information Retrieval scheme (TF-IDF) for keyword/feature selection, which automatically selects the most discriminative keywords for use in association rule generation, and uses a data mining technique for association rule discovery. It consists of three phases: a text preprocessing phase (transformation, filtration, stemming and indexing of the documents), an Association Rule Mining (ARM) phase (applying our designed algorithm for Generating Association Rules based on a Weighting scheme, GARW) and a visualization phase (visualization of results). Experiments were applied to Web page news documents related to the outbreak of the bird flu disease. The extracted association rules contain important features and describe the informative news included in the document collection. The performance of the EART system was compared with that of a system using the Apriori algorithm, in terms of execution time and evaluation of the extracted association rules.
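The pipeline described above (TF-IDF keyword selection followed by keyword association rules) can be sketched minimally as follows. The toy documents, the TF-IDF threshold, and the plain support/confidence rule mining are illustrative simplifications, not the paper's GARW algorithm.

```python
# Minimal sketch of the keyword-rule idea: select keywords by TF-IDF,
# then emit rules A -> B that meet support/confidence thresholds.
# Documents and thresholds are illustrative.
import math

docs = [
    "bird flu outbreak vietnam poultry",
    "bird flu virus spreads poultry farms",
    "vaccine research bird flu virus",
]
tokens = [d.split() for d in docs]
N = len(docs)

def tfidf(term, doc):
    tf = doc.count(term) / len(doc)
    df = sum(term in d for d in tokens)
    return tf * math.log((N + 1) / df)     # simple smoothed IDF

# Keep terms whose best per-document TF-IDF passes an assumed threshold.
keywords = {t for doc in tokens for t in doc if tfidf(t, doc) > 0.05}

def support(a, b):
    return sum(a in d and b in d for d in tokens) / N

def confidence(a, b):
    return support(a, b) / (sum(a in d for d in tokens) / N)

rules = [(a, b, support(a, b), confidence(a, b))
         for a in keywords for b in keywords
         if a != b and support(a, b) >= 2 / 3 and confidence(a, b) >= 0.8]
```

On these toy documents the strongest rules pair "bird" and "flu" (support and confidence 1.0), mirroring how EART surfaces informative keyword associations from the bird-flu news collection.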
Abstract: The problem of estimating time-varying regression is
inevitably concerned with the necessity to choose the appropriate
level of model volatility - ranging from the full stationarity of instant
regression models to their absolute independence of each other. In the
stationary case the number of regression coefficients to be estimated
equals that of regressors, whereas the absence of any smoothness
assumptions augments the dimension of the unknown vector by the
factor of the time-series length. The Akaike Information Criterion
is a commonly adopted means of adjusting a model to the given
data set within a succession of nested parametric model classes,
but its crucial restriction is that the classes are rigidly defined by
the growing integer-valued dimension of the unknown vector. To
make the Kullback information maximization principle underlying the
classical AIC applicable to the problem of time-varying regression
estimation, we extend it to a wider class of data models in which
the dimension of the parameter is fixed, but the freedom of its values
is softly constrained by a family of continuously nested a priori
probability distributions.
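The baseline the abstract starts from is the classical Akaike criterion, which trades goodness of fit against the integer dimension k of the parameter: AIC = 2k - 2 ln(L̂), with the smallest AIC preferred. The comparison below uses illustrative log-likelihood values to show why the rigid integer dimension penalizes the fully time-varying model.

```python
# Sketch of classical AIC model selection, AIC = 2k - 2*ln(L_hat).
# Log-likelihoods and dimensions are illustrative.
import math

def aic(k, log_likelihood):
    return 2 * k - 2 * log_likelihood

models = {
    "stationary (k = p)":       aic(3, -120.0),    # p regressors
    "time-varying (k = p * T)": aic(60, -95.0),    # dimension grows with T
}
best = min(models, key=models.get)
```

Even though the time-varying model fits better (higher likelihood), its dimension grows with the series length and the AIC penalty dominates; the abstract's proposal replaces this hard integer dimension with softly constrained, continuously nested priors.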
Abstract: Bluetooth is a personal wireless communication technology being applied in many scenarios. It is an emerging standard for short-range, low-cost, low-power wireless access technology. Existing MAC (Medium Access Control) scheduling schemes only provide best-effort service for all master-slave connections. It is very challenging to provide QoS (Quality of Service) support for different connections due to the Master-Driven TDD (Time Division Duplex) feature. Moreover, no available solution supports both the delay and bandwidth guarantees required by real-time applications. This paper addresses the issue of how to enhance QoS support in a Bluetooth piconet. The Bluetooth specification proposes a Round Robin scheduler as a possible solution for scheduling transmissions in a Bluetooth piconet. We propose an algorithm that reduces bandwidth waste and enhances the efficiency of the network. We define token counters to estimate the traffic of real-time slaves. To increase bandwidth utilization, a back-off mechanism is then presented for best-effort slaves to decrease the frequency of polling idle slaves. Simulation results demonstrate that our scheme achieves better performance than Round Robin scheduling.
Abstract: Solidification cracking and hydrogen cracking are defects generated in the fusion welding of ultrahigh-carbon steels. However, friction stir welding (FSW) of such steels, being a solid-state technique, has been demonstrated to alleviate the problems encountered in traditional welding. FSW involves several process parameters that must be carefully defined prior to processing. These parameters include, but are not restricted to, tool feed, tool RPM, tool geometry and tool tilt angle. They are a key factor in avoiding wormholes and voids behind the tool and in achieving a defect-free weld. More importantly, these parameters directly affect the microstructure of the weld and hence its final mechanical properties. For that purpose, a 3D finite element (FE) thermo-mechanical model was developed using DEFORM 3D to simulate FSW of carbon steel. At points of interest in the joint, the history of critical state variables such as temperature, stresses and strain rates is tracked. Typical results include the ability to simulate the different weld zones. The simulation predictions were successfully compared with experimental FSW tests. It is believed that such a numerical model can be used to optimize FSW processing parameters in favor of a desirable defect-free weld with better mechanical properties.
Abstract: The continued interest in the use of distributed generation in recent years is leading to growth in the number of distributed generators connected to distribution networks. The steady-state voltage rise resulting from the connection of these generators can be a major obstacle to their connection at lower voltage levels. The present electric distribution network is designed to keep the customer voltage within tolerance limits, which may require a reduction in connectable generation capacity and under-utilization of appropriate generation sites. Thus, distribution network operators need a proper voltage regulation method to allow significant integration of distributed generation systems into the existing network. In this work, the voltage rise problem in a typical distribution system has been studied. A method for voltage regulation of a distribution system with multiple DG systems, through coordinated operation of the distributed generators, capacitors and OLTC, has been developed. A sensitivity-based analysis has been carried out to determine the priority for individual generators in a multiple-DG environment. The effectiveness of the developed method has been evaluated for various cases through simulation results.
Abstract: In this paper, a mathematical model is proposed to estimate the dropping probabilities of cellular wireless networks by queuing handoff calls instead of reserving guard channels. Usually, prioritized handling of handoff calls is done with the help of guard channel reservation. To evaluate the proposed model, gamma inter-arrival and general service time distributions have been considered. The prevention of some attempted calls from reaching the switching center, due to electromagnetic propagation failure or whimsical user behaviour (missed calls, prepaid balance, etc.), makes the inter-arrival time of the input traffic follow a gamma distribution. The performance is evaluated and compared with that of the guard channel scheme.
Abstract: Increasingly sophisticated technologies can now assist surgeons in improving surgical performance through various training programs. Equally important to learning skills is the assessment method, as it determines the learning and technical proficiency of a trainee. A consistent and rigorous assessment system will ensure that trainees acquire the specific level of competency prior to certification. This paper reviews the methods currently in use for the assessment of surgical skill, as well as some modern techniques using computer-based measurements and virtual reality systems for more quantitative measurement.
Abstract: The use of High Order Statistics (HOS) analysis is expected to provide many candidate features that can be selected for pattern recognition. More candidate features can be extracted by simple manipulation through a specific mathematical function prior to the HOS analysis. A feature extraction method using HOS analysis combined with Difference-to-the-Nth-Power manipulation has been examined in an Automatic Modulation Recognition (AMR) application to recognize three digital modulation schemes, namely QPSK, 16QAM and 64QAM, in an AWGN transmission channel. Simulation results are reported for HOS analysis up to order 12 and Difference-to-the-Nth-Power manipulation up to N = 4. The obtained AMR accuracy rate is 90% at SNR > 10 dB using the Simple Decision method as the classifier, while the Voted Decision method achieves 96% at SNR > 2 dB.
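A simplified illustration of why higher-order statistics separate modulation schemes (without the Difference-to-the-Nth-Power step): the normalized fourth-order moment of the in-phase component differs between QPSK and 16QAM. Noise-free constellations are used here for clarity; this is not the paper's order-12 feature set.

```python
# Sketch of an HOS discriminant: normalized 4th-order moment
# E[x^4] / E[x^2]^2 of the I component, on noise-free constellations.
import numpy as np

qpsk_i = np.array([1, 1, -1, -1]) / np.sqrt(2)            # QPSK I levels
qam16_i = np.repeat(np.array([-3, -1, 1, 3]), 4) / np.sqrt(10)  # 16QAM I levels

def norm_m4(x):
    """Normalized 4th-order moment E[x^4] / E[x^2]^2."""
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2

m4_qpsk, m4_qam16 = norm_m4(qpsk_i), norm_m4(qam16_i)
```

The constant-modulus QPSK levels give exactly 1.0 while the multi-level 16QAM gives 1.64, so even this single low-order statistic separates the two schemes; the paper's higher orders and N-th power differences add further discriminating features.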
Abstract: This research aims to compare the percentage of correct classification of the Empirical Bayes (EB) method with that of the classical method when data are constructed as near-normal, short-tailed and long-tailed symmetric, and short-tailed and long-tailed asymmetric. The study is performed using a conjugate prior, a normal distribution with known mean and unknown variance. The hyper-parameters estimated by the EB method are substituted into the posterior predictive probability, which is used to predict new observations. Data are generated, consisting of a training set and a test set, with sample sizes of 100, 200 and 500 for binary classification. The results show that the EB method exhibits improved performance over the classical method in all situations under study.
Abstract: The planning community has long been discussing emerging paradigms within planning theory in the face of the changing conditions of the world order. The paradigm shift concept was introduced by Thomas Kuhn, in 1960, who claimed the necessity of shifts within the boundaries of scientific knowledge; following him, in 1970 Imre Lakatos also gave priority to the emergence of multi-paradigm societies [24]. Multi-paradigm is changing our predetermined lifeworld through uncertainties. Those uncertainties are reflected on two sides: the first is uncertainty as a concept of possibility and creativity in the public sphere, and the second is uncertainty as a risk. Therefore, it is necessary to apply a resilience planning approach to be more dynamic in controlling uncertainties which have the potential to transfigure present definitions of time and space. In this way, the stability of the system can be achieved. Uncertainty is not only an outcome of worldwide changes but also a place-specific issue; i.e., it changes from continent to continent, country to country, and region to region. Therefore, applying strategic spatial planning with respect to the resilience principle contributes to controlling, grasping and internalizing uncertainties through place-specific strategies. In today's fast-changing world, the planning system should follow strategic spatial projects to manage multi-paradigm societies with adaptive capacities. Here, we have selected two alternatives to demonstrate this: 1. Tehran (Iran), from the Middle East, and 2. Bath (United Kingdom), from Europe. The study elaborates the uncertainties and particularities in their strategic spatial planning processes in a comparative manner. Through the comparison, the study aims at assessing place-specific priorities in strategic planning. The approach is a two-way stream, in which the case cities from the extreme ends of the spectrum can learn from each other.
The structure of this paper is, firstly, to compare the semi-periphery (Tehran) and core-periphery (Bath) cities, with a focus on revealing how they are equipped to face uncertainties according to their geographical locations and local particularities. Secondly, the key message to address is: "Each locality requires its own strategic planning approach to be resilient."
Abstract: Biometric methods include recognition techniques based on fingerprint, iris, hand geometry, voice, face, ears and gait. The gait recognition approach has some advantages; for example, it does not require the prior cooperation of the observed subject, and it can record many biometric features for deeper analysis, but most research proposals have a high computational cost. This paper presents a gait recognition system with feature extraction on a bounding rectangle drawn over the observed person. Statistical results on a database of 500 videos are shown.
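The bounding-rectangle feature idea can be sketched as follows: from a binary silhouette mask, take the tightest rectangle around the subject and derive simple per-frame features such as its aspect ratio. The toy mask below stands in for real video frames; it is not the paper's feature set.

```python
# Sketch of bounding-rectangle gait features from a binary silhouette.
# The mask is a toy upright "person", illustrative only.
import numpy as np

mask = np.zeros((8, 6), dtype=bool)
mask[1:7, 2:4] = True            # 6 rows x 2 cols of foreground pixels

def bounding_rect(mask):
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    top, bottom = rows[0], rows[-1]
    left, right = cols[0], cols[-1]
    return top, left, bottom - top + 1, right - left + 1   # y, x, h, w

y, x, h, w = bounding_rect(mask)
aspect = h / w                   # per-frame height/width feature
```

Tracking how such cheap rectangle features (position, height, width, aspect ratio) evolve across frames gives a low-cost signature of the gait cycle, in line with the system's goal of avoiding heavy computation.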