Abstract: This paper presents a novel method for inferring odors from neural activity observed in rats' main olfactory bulbs. Multi-channel extracellular single-unit recordings were made with micro-wire electrodes (tungsten, 50μm, 32 channels) implanted in the mitral/tufted cell layers of the main olfactory bulb of anesthetized rats to obtain neural responses to various odors. The neural response used as the key feature was measured by subtracting the neural firing rate before the stimulus from that after it. For odor inference, we developed a decoding method based on maximum likelihood (ML) estimation. The results show average decoding accuracies of 100.0%, 96.0%, 84.0%, and 100.0% for the four rats, respectively. This work has profound implications for a novel brain-machine interface system for odor inference.
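The ML decoding step described above can be sketched as follows. This is only an illustrative reconstruction: the Gaussian response model, the odor names, and the firing-rate numbers are assumptions, not the authors' data or method details.

```python
import math

# Hypothetical training statistics: for each odor, per-channel (mean, std) of the
# response feature (post-stimulus firing rate minus pre-stimulus firing rate).
# All numbers here are illustrative placeholders.
response_model = {
    "odor_A": [(4.0, 1.0), (0.5, 1.0)],
    "odor_B": [(0.5, 1.0), (3.5, 1.0)],
}

def log_likelihood(feature, stats):
    """Sum of per-channel Gaussian log-likelihoods (channels assumed independent)."""
    ll = 0.0
    for x, (mu, sigma) in zip(feature, stats):
        ll += -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)
    return ll

def decode_odor(feature):
    """ML estimate: the odor whose response model best explains the observed feature."""
    return max(response_model, key=lambda odor: log_likelihood(feature, response_model[odor]))

print(decode_odor([3.8, 0.7]))  # a response resembling the "odor_A" profile
```

Under this model, decoding reduces to picking the odor that maximizes the summed log-likelihood of the per-channel rate changes.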
Abstract: The proliferation of web applications and the pervasiveness of mobile technology make web-based attacks more attractive and easier to launch. A Web Application Firewall (WAF) is an intermediary between the web server and users that provides comprehensive protection for web applications. A WAF implements a negative security model, in which the detection and prevention mechanisms are based on predefined or user-defined attack signatures and patterns. However, a WAF alone is not adequate as a defense against web vulnerabilities, which are increasing in number and complexity daily. This paper presents a methodology for automatically designing a positive security model that identifies and allows only legitimate web queries. The paper shows that a true positive rate of more than 90% can be achieved.
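A positive security model of the kind described can be sketched as a whitelist of legitimate query shapes. This is a minimal illustration only; the endpoint paths, parameter names, and patterns below are hypothetical, not the paper's learned model.

```python
import re

# Hypothetical whitelist learned from legitimate traffic: each parameter of each
# endpoint maps to a regular expression describing its allowed values.
ALLOWED = {
    "/search": {"q": re.compile(r"^[\w \-]{1,64}$")},
    "/item":   {"id": re.compile(r"^\d{1,10}$")},
}

def is_legitimate(path, params):
    """Accept a query only if the endpoint is known and every parameter matches
    its whitelisted pattern; anything outside the model is rejected."""
    rules = ALLOWED.get(path)
    if rules is None:
        return False
    return all(k in rules and rules[k].fullmatch(v) for k, v in params.items())

print(is_legitimate("/item", {"id": "42"}))          # accepted
print(is_legitimate("/item", {"id": "42 OR 1=1"}))   # rejected: SQLi-like value
```

The design choice is the essence of a positive model: instead of enumerating attacks, it enumerates what legitimate traffic looks like and denies everything else.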
Abstract: Image compression plays a vital role in today's communication. The limited allocated bandwidth leads to slower communication, so to improve the transmission rate within the available bandwidth, image data must be compressed before transmission. There are basically two types of compression: 1) lossy and 2) lossless. Although lossy compression gives higher compression than lossless compression, its retrieval accuracy is lower. The JPEG and JPEG2000 image compression systems use Huffman coding for image compression. The JPEG2000 coding system uses the wavelet transform, which decomposes the image into different levels, where the coefficients in each sub-band are uncorrelated with the coefficients of other sub-bands. Embedded Zerotree Wavelet (EZW) coding exploits the multi-resolution properties of the wavelet transform to give a computationally simple algorithm with better performance than existing wavelet-based coders. For further improvement of compression applications, other coding methods have recently been suggested; an ANN-based approach is one such method. Artificial neural networks have been applied to many problems in image processing and have demonstrated their superiority over classical methods when dealing with noisy or incomplete data in image compression applications. A performance analysis of different images is presented, comparing the EZW coding system with an error backpropagation algorithm. The implementation and analysis show approximately 30% higher accuracy in the retrieved image compared to the existing EZW coding system.
Abstract: Most papers model the Joint Replenishment Problem (JRP) as a (kT, S) policy, where kT is an integer multiple of a common review period T and S is a predefined order-up-to level. In general, the (T, S) policy is characterized by a long out-of-control period, which requires a large amount of safety stock compared to the (R, Q) policy. In this paper a probabilistic model is built in which the item with the shortest time between order intervals (T), call it item (i), is modeled under an (R, Q) policy with its inventory continuously reviewed, while the remaining items (j) are periodically reviewed at definite times corresponding to item (i).
Abstract: The performance of high-resolution schemes is investigated for unsteady, inviscid and compressible multiphase flows. An Eulerian diffuse-interface approach has been chosen for the simulation of multicomponent flow problems. The reduced five-equation and full seven-equation models are used with the HLL and HLLC approximations. The authors demonstrate the advantages and disadvantages of both the seven-equation and five-equation models by studying their performance with the HLL and HLLC algorithms on simple test cases. The seven-equation model is based on the two-pressure, two-velocity concept of Baer–Nunziato [10], while the five-equation model is based on mixture velocity and pressure. The numerical evaluations of the two variants of Riemann solvers have been conducted for the classical one-dimensional air-water shock tube and compared with the analytical solution for error analysis.
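For reference, the HLL approximate Riemann flux used by such schemes takes the standard form, with $S_L$ and $S_R$ the fastest left- and right-going wave-speed estimates and $U$, $F$ the conserved state and flux:

```latex
F^{\mathrm{HLL}} =
\begin{cases}
F_L, & 0 \le S_L,\\[4pt]
\dfrac{S_R F_L - S_L F_R + S_L S_R \left( U_R - U_L \right)}{S_R - S_L}, & S_L \le 0 \le S_R,\\[4pt]
F_R, & S_R \le 0.
\end{cases}
```

HLLC refines this two-wave model by restoring an intermediate contact wave, which is why it typically resolves material interfaces in multiphase problems more sharply than HLL.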
Abstract: Determining the depth of anesthesia is a challenging problem in the context of biomedical signal processing. Various methods have been suggested to determine a quantitative index of the depth of anesthesia, but most of them suffer from high sensitivity during surgery. A novel method based on the energy scattering of samples in the wavelet domain is suggested to represent the basic content of the electroencephalogram (EEG) signal. In this method, the EEG signal is first decomposed into different sub-bands; then the samples are squared and the energy of the sample sequence is constructed across each scale and time and normalized; finally, the entropy of the resulting sequences is proposed as a reliable index. Empirical results showed that applying the proposed method to EEG signals can classify the awake, moderate and deep anesthesia states similarly to BIS.
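The pipeline described (decompose, square, normalize sub-band energies, take entropy) can be sketched as follows. This is a toy reconstruction under stated assumptions: it uses a simple Haar decomposition and synthetic signals, not the authors' wavelet choice or EEG data.

```python
import math

def haar_step(x):
    """One level of the Haar wavelet transform: (approximation, detail)."""
    approx = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return approx, detail

def subband_energy_entropy(signal, levels=3):
    """Decompose the signal, square the coefficients per sub-band, normalize the
    sub-band energies into a distribution, and return its Shannon entropy."""
    energies, current = [], list(signal)
    for _ in range(levels):
        current, detail = haar_step(current)
        energies.append(sum(d * d for d in detail))
    energies.append(sum(a * a for a in current))
    total = sum(energies) or 1.0
    p = [e / total for e in energies]
    return -sum(q * math.log(q) for q in p if q > 0)

# A flat signal concentrates energy in one sub-band (low entropy), while an
# irregular one scatters energy across sub-bands (higher entropy).
flat = [1.0] * 64
irregular = [math.sin(7 * n) + 0.5 * math.sin(29 * n) for n in range(64)]
print(subband_energy_entropy(flat) < subband_energy_entropy(irregular))  # True
```

The index thus tracks how scattered the signal's energy is across scales, which is the quantity the abstract links to anesthesia depth.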
Abstract: This paper presents an adaptive motion estimator that can be dynamically reconfigured with the best algorithm depending on how the nature of the video varies during the runtime of an application. The Four Step Search (4SS) and Gradient Search (GS) algorithms are integrated in the estimator for use with rapid and slow video sequences, respectively. The Full Search Block Matching (FSBM) algorithm has also been integrated for use with video sequences that are not real-time oriented.
To efficiently reduce the computational cost while achieving better visual quality at low power, the proposed motion estimator is based on a Variable Block Size (VBS) scheme that uses only the 16x16, 16x8, 8x16 and 8x8 modes. Experimental results show that the adaptive motion estimator gives better results in terms of Peak Signal-to-Noise Ratio (PSNR), computational cost, FPGA occupied area, and dissipated power relative to the most popular variable block size schemes presented in the literature.
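All three search strategies mentioned (FSBM, 4SS, GS) minimize the same block-matching cost, typically the sum of absolute differences (SAD). The sketch below shows the exhaustive FSBM variant on a toy frame pair; the frame contents, block size, and search radius are illustrative assumptions, not the paper's hardware design.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def block(frame, top, left, size):
    """Extract a size x size sub-block starting at (top, left)."""
    return [row[left:left + size] for row in frame[top:top + size]]

def full_search(cur, ref, top, left, size=4, radius=2):
    """Exhaustive block matching: best (cost, dy, dx) within +/-radius pixels."""
    target = block(cur, top, left, size)
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= len(ref) - size and 0 <= x <= len(ref[0]) - size:
                cost = sad(target, block(ref, y, x, size))
                if best is None or cost < best[0]:
                    best = (cost, dy, dx)
    return best

# Toy frames: a bright 4x4 patch moves one pixel right between ref and cur,
# so the best match for the current block lies one pixel to the left in ref.
ref = [[0] * 8 for _ in range(8)]
cur = [[0] * 8 for _ in range(8)]
for r in range(2, 6):
    for c in range(2, 6):
        ref[r][c] = 9
        cur[r][c + 1] = 9
print(full_search(cur, ref, 2, 3))  # (0, 0, -1): zero cost at displacement dx = -1
```

Fast algorithms such as 4SS and GS evaluate the same SAD cost at far fewer candidate positions, which is the computational-cost trade-off the abstract exploits.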
Abstract: Lighvan cheese is basically made from sheep milk in the area of the Sahand mountainside, which is located in the northwest of Iran. The main objective of this study was to investigate the effect of enterococci isolated from traditional Lighvan cheese on the quality of Iranian UF white cheese during ripening. The experimental design was a split plot based on randomized complete blocks; the main plots were four types of starters and the subplots were different ripening durations. Addition of Enterococcus spp. did not significantly (P
Abstract: Freeze concentration freezes or crystallises the water molecules out as ice crystals and leaves behind a highly concentrated solution. In conventional suspension freeze concentration, where ice crystals form as a suspension in the mother liquor, separation of the ice is difficult. The size of the ice crystals is very limited, which requires the use of scraped-surface heat exchangers; these are very expensive and account for approximately 30% of the capital cost. This research uses a newer method, progressive freeze concentration, in which ice crystals form as a layer on a designed heat exchanger surface. In this particular research, a helically structured copper crystallisation chamber was designed and fabricated. The effect of two operating conditions on the performance of the newly designed crystallisation chamber was investigated: circulation flow rate and coolant temperature. The performance of the design was evaluated by the effective partition constant, K, calculated from the volume and concentration of the solid and liquid phases. The system was also monitored by a data acquisition tool to observe the temperature profile throughout the process. On completing the experimental work, it was found that a higher flow rate resulted in a lower K, which translates into higher efficiency. Efficiency was highest at a flow rate of 1000 ml/min and at a coolant temperature of -6 °C.
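The effective partition constant mentioned above is conventionally defined from the solute concentrations of the ice (solid) and liquid phases, and in the progressive freeze concentration literature it is commonly estimated from the volume change of the liquid phase. As a reference (the exact form used by the authors is not given in the abstract):

```latex
K = \frac{C_S}{C_L},
\qquad
\frac{C_L}{C_0} = \left( \frac{V_L}{V_0} \right)^{K - 1},
```

where $C_0$, $V_0$ are the initial solution concentration and volume and $C_L$, $V_L$ those of the remaining liquid. A lower $K$ means less solute trapped in the ice layer, hence the higher separation efficiency reported at higher flow rates.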
Abstract: This work proposes an equivalent CMOS model, for which an algorithm has been created and a performance evaluation of the proposition has been carried out. In this context, another commonly used model, the ZSTT (Zero Switching Time Transient) model, is chosen for comparison of all the vital features, and the results for the proposed equivalent CMOS model are promising. Excerpts of the created algorithm are also included.
Abstract: Because of their high ductility, aluminum alloys have been widely used as an important base of the metal forming industries. However, the main weak point of these alloys is their low strength, so forming them with conventional methods such as deep drawing and hydroforming has always faced problems such as fracture during the forming process. For this reason, the explosive forming method has recently been recommended for forming such plates. In this paper, free explosive forming of the A2024 aluminum alloy is numerically simulated, and the explosion wave propagation process is studied. The results of this simulation can be effective in predicting product quality. These results are compared with an experimental test and show the superiority of this method over similar methods such as hydroforming and deep drawing.
Abstract: This paper focuses on a critical component of situational awareness (SA): the neural control of the depth flight of an autonomous underwater vehicle (AUV). Constant-depth flight is a challenging but important task for AUVs in achieving a high level of autonomy under adverse conditions. Within the SA strategy, we propose multirate neural control of an AUV trajectory using a neural-network model reference controller for a stochastic model of the non-trivial mid-small size AUV "r2D4". This control system has been demonstrated and evaluated by simulation of diving maneuvers using the Simulink software package. The simulation results show that the chosen AUV model is stable in the presence of high noise, and suggest that fast SA of similar AUV systems, with economy in battery energy, can be achieved during underwater search-and-rescue missions.
Abstract: Pattern matching is one of the fundamental applications in molecular biology, and searching DNA-related data is a common activity for molecular biologists. In this paper we explore the applicability of a new pattern matching technique, the Index-based Forward Backward Multiple Pattern Matching algorithm (IFBMPM), to DNA sequences. Our approach avoids unnecessary comparisons in the DNA sequence; as a result, the number of comparisons of the proposed algorithm is much smaller than that of other existing popular methods. The number of comparisons decreases rapidly, execution time decreases accordingly, and the algorithm shows better performance.
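To illustrate the general idea of index-based matching with forward-backward comparison, here is a simplified sketch: a character-position index restricts the candidate positions, and each candidate is verified by comparing from both ends toward the middle. This is not the authors' IFBMPM; the DNA string and counting scheme are illustrative assumptions.

```python
def build_index(text):
    """Map each character to the sorted list of its positions in the text."""
    index = {}
    for pos, ch in enumerate(text):
        index.setdefault(ch, []).append(pos)
    return index

def search(text, pattern, index, counter):
    """Check the pattern only where its first character occurs, comparing
    characters from both ends toward the middle; counter[0] tallies comparisons."""
    hits = []
    m = len(pattern)
    for start in index.get(pattern[0], []):
        if start + m > len(text):
            continue
        i, j, ok = 0, m - 1, True
        while i <= j:
            counter[0] += 2
            if text[start + i] != pattern[i] or text[start + j] != pattern[j]:
                ok = False
                break
            i += 1
            j -= 1
        if ok:
            hits.append(start)
    return hits

dna = "ACGTACGTTAGC"
idx = build_index(dna)
comparisons = [0]
print(search(dna, "ACGT", idx, comparisons))  # matches at positions 0 and 4
```

The comparison counter makes the abstract's evaluation criterion concrete: candidate filtering via the index is what keeps the count low relative to a naive scan.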
Abstract: This paper presents a longitudinal quasi-linear model for the ADMIRE model. The ADMIRE model is a nonlinear model of an aircraft flying at a high angle of attack, so it cannot be approximated by a linear system. In this paper, to obtain the longitudinal quasi-linear model of the ADMIRE, a state transformation based on differentiable functions of the non-scheduling states and control inputs is performed, with the goal of removing any nonlinear terms not dependent on the scheduling parameter. Since it needs no linear approximation and can obtain exact transformations of the nonlinear states, the above-mentioned approach is considered appropriate for establishing the mathematical model of the ADMIRE. To verify this conclusion, simulation experiments were performed, and the results show that the quasi-linear model is sufficiently accurate.
Abstract: In recent years, multi-agent systems have emerged as one of the interesting architectures facilitating distributed collaboration and distributed problem solving. Each node (agent) of the network might pursue its own agenda, exploit its environment, develop its own problem-solving strategy, and establish the required communication strategies. Within each node of the network, one could encounter a diversity of problem-solving approaches. Quite commonly, the agents realize their processing at the level of information granules that is most suitable from their local points of view. Information granules can come at various levels of granularity. Each agent could exploit a certain formalism of information granulation, engaging the machinery of fuzzy sets, interval analysis, or rough sets, to name a few dominant technologies of granular computing. With this in mind, a fundamental issue arises of forming effective interaction linkages between the agents so that they fully broadcast their findings and benefit from interacting with others.
Abstract: People detection from images has a variety of applications, such as video surveillance and driver assistance systems, but it is still a challenging task, and more difficult in crowded environments such as shopping malls, in which occlusion of the lower parts of the human body often occurs. The lack of full-body information requires more effective features than common ones such as HOG. In this paper, new features are introduced that exploit the global self-symmetry (GSS) characteristic of head-shoulder patterns. The features encode the similarity or difference of color histograms and oriented-gradient histograms between two vertically symmetric blocks. These domain-specific features are rapid to compute from integral images in the Viola-Jones cascade-of-rejecters framework. The proposed features are evaluated on our own head-shoulder dataset, which, in part, consists of the well-known INRIA pedestrian dataset. Experimental results show that the GSS features marginally reduce false alarms and that the gradient GSS features are selected more often than the color GSS ones during feature selection.
Abstract: A Bloom filter is a probabilistic and memory-efficient data structure designed to answer rapidly whether an element is present in a set. It can state with certainty that an element is not in the set, whereas presence is reported only with a certain probability. The trade-off in using a Bloom filter is a configurable risk of false positives, the odds of which can be made very low if the number of hash functions is sufficiently large. For spam detection, a weight is attached to each set of elements: the spam weight for a word is a measure used to rate the e-mail, and each word is assigned to a Bloom filter based on its weight. The proposed work introduces an enhanced concept in Bloom filters called the Bin Bloom Filter (BBF). The performance of the BBF over the conventional Bloom filter is evaluated under various optimization techniques. A real-time data set and synthetic data sets are used for the experimental analysis, and results are demonstrated for bin sizes 4, 5, 6 and 7. Analysis of the results shows that the BBF with heuristic techniques performs better than the traditional Bloom filter in spam detection.
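The conventional Bloom filter that the BBF extends can be sketched as follows. This is a textbook implementation, not the authors' BBF; the salted-SHA-256 hashing scheme, the sizes, and the spam words are illustrative choices.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions derived from salted SHA-256."""

    def __init__(self, num_bits=1024, num_hashes=5):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits)  # one byte per bit, for clarity

    def _positions(self, item):
        # Derive k independent-looking hash values by salting with the index.
        for seed in range(self.num_hashes):
            digest = hashlib.sha256(f"{seed}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        # False means definitely absent; True means present with some probability.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("viagra")
bf.add("lottery")
print("viagra" in bf)    # True
print("meeting" in bf)   # False, barring an (unlikely) false positive
```

In the BBF setting described by the abstract, one would maintain several such filters (bins), each holding words of a given spam weight, and rate an e-mail by which bins its words fall into.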
Abstract: Using a one-dimensional quantum hydrodynamic (QHD) model, Korteweg–de Vries (KdV) solitary excitations of electron-acoustic waves (EAWs) have been examined in a two-electron-populated, relativistically degenerate, superdense plasma. It is found that the relativistic degeneracy parameter influences the conditions of formation and the properties of the solitary structures.
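For reference, such solitary excitations are governed by a KdV equation of the generic form

```latex
\frac{\partial \phi}{\partial \tau}
+ A\,\phi\,\frac{\partial \phi}{\partial \xi}
+ B\,\frac{\partial^{3} \phi}{\partial \xi^{3}} = 0,
```

where $\phi$ is the perturbed potential in stretched coordinates $(\xi, \tau)$, and the nonlinearity and dispersion coefficients $A$ and $B$ (whose exact expressions depend on the derivation and are not given in the abstract) carry the dependence on the relativistic degeneracy parameter that shapes the soliton amplitude and width.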
Abstract: This study was initiated with a three-pronged objective: first, to identify the relationship between Technological Competency factors (Technical Capability, Firm Innovativeness and E-Business Practices) and professional service firms' business performance; second, to investigate the predictors of professional service firms' business performance; and finally, to evaluate the predictors of business performance according to the type of professional service firm. A survey questionnaire was deployed to collect empirical data. The questionnaire was distributed to the owners of professional small and medium-sized service enterprises in the accounting, legal, engineering and architecture sectors. Analysis showed that all three Technological Competency factors have a moderate effect on business performance. In addition, the regression models indicate that technical capability is the most influential determinant of business performance, followed by e-business practices and firm innovativeness. Accordingly, the main predictor of business performance for all types of firms is technical capability.
Abstract: In this paper we discuss the influence of the route flexibility degree, the open rate of operations and the production type coefficient on makespan. The flexible job-open shop scheduling problem FJOSP (an extension of classical job shop scheduling) is analyzed. For the analysis of the production process we used a hybrid heuristic combining GRASP (greedy randomized adaptive search procedure) with a simulated annealing algorithm. Experiments with different levels of the factors have been conducted and compared. The GRASP+SA algorithm has been tested and illustrated with results for the serial route and the parallel one.