Abstract: A combined three-microphone voice activity detector (VAD) and noise-canceling system is studied to enhance speech recognition in an automobile environment. A previous experiment clearly showed the ability of the composite system to cancel a single noise source outside of a defined zone. This paper investigates the performance of the composite system when there are frequently moving noise sources (noise sources come from different locations but are not always present at the same time), e.g. speech from another passenger or from a radio while the desired speech is present. To work in such an environment, the three-microphone VAD detects voice from a "VAD valid zone", while the three-microphone noise canceller uses a "noise canceller valid zone" defined in free space around the user's head. A desired voice should therefore lie in the intersection of the noise canceller valid zone and the VAD valid zone, and all noise outside this intersection is suppressed. Experiments were carried out in a real environment, i.e. all results were recorded in a car using omni-directional electret condenser microphones.
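The zone-intersection rule described above can be sketched in a few lines: a source is accepted as desired speech only if its estimated position lies in both the VAD valid zone and the noise canceller valid zone. The spherical zone shapes and coordinates below are illustrative assumptions, not the paper's actual zone geometry.

```python
import numpy as np

def in_zone(point, center, radius):
    """True if a 3-D point lies inside a spherical zone (assumed zone shape)."""
    return np.linalg.norm(np.asarray(point) - np.asarray(center)) <= radius

def accept_speech(src, vad_center, vad_radius, nc_center, nc_radius):
    """Accept a source only if it lies in the intersection of the VAD valid
    zone and the noise-canceller valid zone; everything else is noise."""
    return in_zone(src, vad_center, vad_radius) and in_zone(src, nc_center, nc_radius)

# A source at the user's head falls in both zones -> desired speech
print(accept_speech([0.0, 0.0, 0.0], [0.1, 0, 0], 0.5, [-0.1, 0, 0], 0.5))  # True
# A radio loudspeaker outside the zones -> suppressed as noise
print(accept_speech([1.0, 0.0, 0.0], [0.1, 0, 0], 0.5, [-0.1, 0, 0], 0.5))  # False
```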
Abstract: To identify discriminative sequence features between exons and introns, a new paradigm, rescaled-range frameshift analysis (RRFA), is proposed. By RRFA, two new sequence features, the frameshift sensitivity (FS) and the accumulative penta-mer complexity (APC), were discovered and further integrated into a feature of larger scale, the persistency in anti-mutation (PAM). Feature-validation experiments were performed on six model organisms to test the discriminative power. All the experimental results strongly support that FS, APC and PAM are distinguishing features between exons and introns. These newly identified sequence features provide new insights into the sequence composition of genes, and they have great potential to form a new basis for recognizing exon-intron boundaries in gene sequences.
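The abstract does not define APC formally; as a rough illustration only, a penta-mer complexity measure can be sketched as the number of distinct 5-mers in a sequence window (the paper's accumulative definition may well differ):

```python
def pentamer_complexity(seq, k=5):
    """Count distinct k-mers (penta-mers by default) in a sequence.
    A simple complexity proxy; the exact APC definition may differ."""
    return len({seq[i:i + k] for i in range(len(seq) - k + 1)})

# A repetitive sequence yields few distinct penta-mers (low complexity)
print(pentamer_complexity("ACGTACGTACGT"))  # 4
# A more varied sequence of the same length yields more (higher complexity)
print(pentamer_complexity("ACGTTGCAACCG"))  # 8
```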
Abstract: During the last few years, several sheet hydroforming
processes have been introduced. Despite the advantages of these
methods, they have some limitations. Of the processes, the two main
ones are the standard hydroforming and hydromechanical deep
drawing. A new sheet hydroforming die set was proposed that has the
advantages of both processes and eliminates their limitations. In this
method, a polyurethane plate was used as a part of the die-set to
control the blank holder force. This paper outlines the Taguchi optimization methodology, which is applied to optimize the effective parameters in forming cylindrical cups with the new sheet hydroforming die set. The process parameters evaluated in this
research are polyurethane hardness, polyurethane thickness, forming
pressure path and polyurethane hole diameter. The design of experiments was based on Taguchi's L9 orthogonal array, and analysis of variance (ANOVA) was employed to analyze the effect of these parameters on the forming pressure. The analysis of the results showed that the optimal combination for low forming pressure is harder polyurethane, a larger polyurethane hole diameter and thinner polyurethane. Finally, a confirmation test was performed with the optimal combination of parameters, showing that the Taguchi method is suitable for this optimization process.
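The Taguchi analysis above can be sketched as follows: the responses from the nine L9 runs are grouped by factor level, and the level with the lowest mean forming pressure is selected per factor (smaller-is-better). The L9 array is standard, but the response values below are hypothetical, not the paper's measurements.

```python
import numpy as np

# Standard L9(3^4) orthogonal array: 9 runs, 4 factors at 3 levels (0, 1, 2)
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

# Hypothetical forming-pressure responses (MPa) for the 9 runs
y = np.array([12.0, 10.5, 9.8, 11.2, 9.9, 10.1, 9.5, 10.8, 10.0])

def level_means(col):
    """Mean response at each level of one factor."""
    return [y[L9[:, col] == lv].mean() for lv in range(3)]

for name, col in zip(["hardness", "thickness", "pressure path", "hole diameter"],
                     range(4)):
    means = level_means(col)
    print(name, [round(m, 2) for m in means], "best level:", int(np.argmin(means)))
```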
Abstract: Development of levels of service in a municipal context is a flexible vehicle for performing quality-cost trade-off analysis for municipal services. This trade-off depends on the willingness of a community to pay as well as on the condition of the assets. The community's perspective on an asset's performance, from a service point of view, may be quite different from the municipality's perspective on the same asset's performance, from a condition point of view. This paper presents a three-phase, level-of-service-based methodology for water mains that consists of: 1) development of an Analytical Hierarchy model of level of service; 2) development of a Fuzzy Weighted Sum model of the water main condition index; and 3) derivation of a fuzzy-logic-based function that maps level of service to asset condition index. This mapping will assist asset managers in quantifying the condition improvement required to meet service goals and in making more informed decisions on interventions and related priorities.
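In its crisp form, the Fuzzy Weighted Sum model of phase 2 reduces to a weighted sum of condition-factor scores with weights that could come from the AHP model of phase 1. The factors and weights below are hypothetical, and the paper's fuzzy membership functions are not reproduced here.

```python
def condition_index(scores, weights):
    """Weighted-sum condition index: factor scores in [0, 1] combined with
    weights (e.g. from AHP pairwise comparisons) that sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical water-main factors: pipe age, break history, soil corrosivity
print(condition_index([0.7, 0.4, 0.6], [0.5, 0.3, 0.2]))  # 0.59
```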
Abstract: Building inspection is one of the key components of building maintenance. The primary purpose of performing a building inspection is to evaluate the building's condition. Without inspection, it is difficult to determine a built asset's current condition, so failure to inspect can contribute to the asset's future failure. Traditionally, a longhand survey description has been widely used for property condition reports. Surveys that employ ratings instead of descriptions are gaining wide acceptance in the industry because they cater to the need for numerical analysis output. These kinds of surveys are also in keeping with the new RICS HomeBuyer Report 2009. In this paper, we propose a new assessment method, derived from current rating systems, specifically for assessing the smart school building's condition and rating the seriousness of each defect identified. These two assessment criteria are then multiplied to find the building's score, which we call the Condition Survey Protocol (CSP) 1 Matrix. Instead of a longhand description of a building's defects, this matrix requires concise explanations of the defects identified, thus saving on-site time during a smart school building inspection. The full score is used to give the building an overall rating: Good, Fair or Dilapidated.
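The scoring scheme described, condition rating multiplied by defect seriousness, summed into a building score and then banded into Good/Fair/Dilapidated, can be sketched as follows. The rating scales and band thresholds here are illustrative assumptions, not the paper's calibrated values.

```python
def csp_score(defects):
    """CSP 1 Matrix sketch: each defect is a (condition rating, seriousness)
    pair; the per-defect score is their product, summed for the building."""
    return sum(c * s for c, s in defects)

def overall_rating(total, thresholds=(20, 50)):
    """Band the total score into an overall rating (thresholds assumed)."""
    good, fair = thresholds
    if total <= good:
        return "Good"
    return "Fair" if total <= fair else "Dilapidated"

defects = [(2, 3), (1, 2), (4, 5)]   # e.g. cracked wall, leaky gutter, failed roof
total = csp_score(defects)           # 6 + 2 + 20 = 28
print(total, overall_rating(total))  # 28 Fair
```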
Abstract: In this article, a simulation method called the Homotopy Perturbation Method (HPM) is employed for the steady flow of a Walters' B' fluid in a vertical channel with a porous wall. We employ the HPM to derive the solution of the nonlinear equation obtained by applying a similarity transformation to the ordinary differential equation derived from the continuity and momentum equations of this kind of flow. The results obtained from the HPM are then compared with those from the Runge–Kutta method in order to verify the accuracy of the proposed method. The results show that the Homotopy Perturbation Method achieves good accuracy in predicting the solution of such problems. Finally, we use this solution to obtain the remaining velocity terms and to discuss the underlying physics.
Abstract: With the exponentially increasing demand for wireless communications, the capacity of current cellular systems will soon become incapable of handling the growing traffic. Since radio frequencies are a diminishing natural resource, there seems to be a fundamental barrier to further capacity increase. The solution can be found in smart antenna systems.
Smart or adaptive antenna arrays consist of an array of antenna elements with signal processing capability that dynamically optimizes the radiation and reception of a desired signal. Smart antennas can place nulls in the direction of interferers via adaptive updating of the weights linked to each antenna element. They thus cancel out most of the co-channel interference, resulting in better reception quality and fewer dropped calls. Smart antennas can also track the user within a cell via direction-of-arrival algorithms. This makes them more advantageous than other antenna systems. This paper focuses on a few issues concerning smart antennas in mobile radio networks.
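The null-placing behavior described above can be illustrated with a simple least-squares weight design for a uniform linear array: the weights are constrained to give unit gain toward the desired user and zero gain toward an interferer. The angles and array size are arbitrary choices; an adaptive algorithm (e.g. LMS) would update such weights dynamically as the abstract describes.

```python
import numpy as np

def steering_vector(theta_deg, n_elements=8, d=0.5):
    """Steering vector of a uniform linear array, spacing d in wavelengths."""
    n = np.arange(n_elements)
    return np.exp(2j * np.pi * d * n * np.sin(np.deg2rad(theta_deg)))

# Desired user at 0 deg, co-channel interferer at 40 deg (illustrative angles)
A = np.column_stack([steering_vector(0.0), steering_vector(40.0)])

# Solve A^H w = [1, 0]: unit gain toward the user, a null toward the interferer
w = np.linalg.pinv(A.conj().T) @ np.array([1.0, 0.0])

print(abs(w.conj() @ steering_vector(0.0)))   # ~1.0: desired signal passed
print(abs(w.conj() @ steering_vector(40.0)))  # ~0.0: interference nulled
```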
Abstract: Linear convolutive filters are fast in calculation and in application, and thus, often used for real-time processing of continuous data streams. In the case of transient signals, a filter has not only to detect the presence of a specific waveform, but to estimate its arrival time as well. In this study, a measure is presented which indicates the performance of detectors in achieving both of these tasks simultaneously. Furthermore, a new sub-class of linear filters within the class of filters which minimize the quadratic response is proposed. The proposed filters are more flexible than the existing ones, like the adaptive matched filter or the minimum power distortionless response beamformer, and prove to be superior with respect to that measure in certain settings. Simulations of a real-time scenario confirm the advantage of these filters as well as the usefulness of the performance measure.
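As a concrete instance of the combined detection and arrival-time task, a matched filter (one of the reference detectors the abstract mentions) correlates the data stream with the known waveform and takes the correlation peak as the arrival-time estimate. The waveform, noise level and arrival time below are arbitrary simulation choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known transient waveform (template), buried in noise at sample 120
template = np.hanning(32) * np.sin(2 * np.pi * 0.2 * np.arange(32))
signal = 0.3 * rng.standard_normal(300)
true_arrival = 120
signal[true_arrival:true_arrival + 32] += template

# Matched filter: correlate with the template; the peak marks the arrival time
mf_out = np.correlate(signal, template, mode="valid")
estimated = int(np.argmax(mf_out))
print(estimated)  # close to 120
```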
Abstract: Lightweight ceramic materials in the form of bricks and blocks are widely used in modern construction. They may be obtained by adding rice husk, rye straw, etc., as pore-forming materials. Rice husk is a major by-product of the rice milling industry, and its utilization as a valuable product has always been a problem. Various technologies for utilizing rice husk through biological and thermochemical conversion are being developed. The purpose of this work is to develop lightweight ceramic materials with a clay matrix and a rice husk filler, and to examine their main physicomechanical properties. The results obtained suggest that materials synthesized on the basis of waste materials can be used as lightweight materials for construction purposes.
Abstract: In the forming of ceramic materials, the plasticity concept is commonly used. This term refers to a particular mechanical behavior when clay is mixed with water. A plastic ceramic material shows a permanent strain without rupture when a compressive load produces a shear stress that exceeds the material's yield strength. A plastic ceramic body also exhibits measurable elastic behavior below the yield strength and when the applied load is removed. In this work, a mathematical model was developed from concepts of plasticity theory by using the stress/strain diagram under compression.
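As a minimal illustration of such a model, an elastic-perfectly-plastic law (linear below yield, constant stress beyond) can be read directly off a compression stress/strain diagram. This is an assumed model form; the paper's actual fitted model may include hardening terms not shown here.

```python
def bilinear_stress(strain, elastic_modulus, yield_strength):
    """Elastic-perfectly-plastic sketch (assumed model form):
    stress = E * strain in the elastic region, capped at the yield strength."""
    return min(elastic_modulus * strain, yield_strength)

# Hypothetical clay body: E = 50 GPa-scale units, yield strength = 100
print(bilinear_stress(0.001, 50e3, 100.0))  # elastic region: ~50.0
print(bilinear_stress(0.01, 50e3, 100.0))   # plastic plateau: 100.0
```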
Abstract: The survival of publicly listed companies largely depends on their stocks being liquidly traded. This goal can be achieved when new investors are attracted to invest in companies' stocks. Among different groups of investors, individual investors are generally less able to objectively evaluate companies' risks and returns, and tend to be emotionally biased in their investing decisions. Therefore, their decisions may be formed on the basis of perceived risks and returns, and influenced by companies' images. This study finds that perceived risk, perceived returns and trust directly affect individual investors' trading decisions, while attitude towards the brand partially mediates these relationships. This finding suggests that, in courting individual investors, companies still need to perform financially, while building a good image can result in their stocks being accepted more quickly than the stocks of well-performing companies with hidden images.
Abstract: The major focus of this work was to characterize the hydrodynamics in a packed bed with and without a static mixer by using Computational Fluid Dynamics (CFD). The commercial software COMSOL MULTIPHYSICS™ Version 3.3 was used to simulate the flow fields of the mixed-gas reactants, i.e. CO and H2. The packed bed was a single tube with an inside diameter of 0.8 cm and a length of 1.2 cm. The static mixer inserted inside the tube had one twisting element, 0.8 cm in diameter and 1.2 cm in length. The packed beds with and without the static mixer were both packed with approximately 700 spherical structures representing catalyst pellets. Incompressible Navier-Stokes equations were used to model the gas flow inside the beds at steady-state conditions, with an inlet Reynolds number (Re) of 2.31. The results revealed that, with the insertion of the static mixer, the gas was forced to flow radially inward and outward between the central portion of the tube and the tube wall. This could help improve the overall performance of the packed bed, which could be utilized for heterogeneous catalytic reactions such as reforming and Fischer-Tropsch reactions.
Abstract: In the present study, the incorporation of graphene
into blends of acrylonitrile-butadiene-styrene terpolymer with
polypropylene (ABS/PP) was investigated focusing on the
improvement of their thermomechanical characteristics and the effect
on their rheological behavior. The blends were prepared by melt
mixing in a twin-screw extruder and were characterized by measuring
the MFI as well as by performing DSC, TGA and mechanical tests.
The addition of graphene to ABS/PP blends tends to increase their
melt viscosity, due to the confinement of polymer chains motion.
Also, graphene raises the crystallization temperature (Tc), especially in blends with higher PP content, because it reduces the surface energy of PP nucleation, which is a consequence
of the attachment of PP chains to the surface of graphene through the
intermolecular CH-π interaction. Moreover, the above nanofiller
improves the thermal stability of PP and increases the residue of
thermal degradation at all the investigated compositions of blends,
due to the thermal isolation effect and the mass transport barrier
effect. Regarding the mechanical properties, the addition of graphene
improves the elastic modulus, because of its intrinsic mechanical
characteristics and its rigidity, and this effect is particularly strong in
the case of pure PP.
Abstract: Caching has been suggested as a solution for reducing bandwidth utilization and minimizing query latency in mobile environments. Over the years, different caching approaches have been proposed: some rely on the server to periodically broadcast reports informing clients of updated data, while others allow the clients to request data whenever needed. Recently, a hybrid cache consistency scheme, the Scalable Asynchronous Cache Consistency Scheme (SACCS), was proposed, which combines the benefits of the two approaches and has proved to be more efficient and scalable. Nevertheless, caching has its limitations too: the limited cache size and the limited bandwidth make the implementation of a cache replacement strategy an important aspect of improving cache consistency algorithms. We propose a new cache replacement strategy, the Least Unified Value (LUV) strategy, to replace the Least Recently Used (LRU) strategy that SACCS was based on. This paper studies the advantages and drawbacks of the newly proposed strategy, comparing it with different categories of cache replacement strategies.
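The LRU baseline that LUV is meant to replace can be sketched compactly. A real LUV-style policy would additionally weigh access frequency, fetch cost and item size, which this sketch deliberately omits.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU replacement policy, the baseline SACCS used."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)         # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used

cache = LRUCache(2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")                 # "a" becomes most recently used
cache.put("c", 3)              # evicts "b", the least recently used
print(cache.get("b"), cache.get("a"))  # None 1
```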
Abstract: GSM has undoubtedly become the most widespread cellular technology and has established itself as one of the most promising technologies in wireless communication. The next generation of mobile telephones has also become more powerful and innovative, so that new services related to the user's location will arise. Beyond the 911 requirements for emergency location initiated by the Federal Communications Commission (FCC) of the United States, GSM positioning can be highly integrated into cellular communication technology for commercial use. However, GSM positioning faces many challenges. Issues like accuracy, availability, reliability and suitable cost render the development and implementation of GSM positioning a challenging task. In this paper, we investigate the optimal means of mobile position tracking. We employ an innovative scheme by integrating the Kalman filter into the localization process, given its excellent tracking characteristics. When tracking in two dimensions, the Kalman filter is very powerful due to its reliable performance, as it supports estimation of past, present and future states, even in unknown environments. We show that enhanced position tracking results are achieved when implementing the Kalman filter for GSM tracking.
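The two-dimensional tracking described above can be sketched with a constant-velocity Kalman filter fed by noisy position fixes. The motion model and noise levels below are assumptions for illustration, not measured GSM parameters.

```python
import numpy as np

# Constant-velocity model: state [x, y, vx, vy], position-only measurements
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)   # process noise (assumed)
R = 25.0 * np.eye(2)   # measurement noise, e.g. coarse position fixes (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle of the Kalman filter."""
    x = F @ x
    P = F @ P @ F.T + Q                    # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)                # update with measurement z
    P = (np.eye(4) - K @ H) @ P
    return x, P

rng = np.random.default_rng(1)
x, P = np.zeros(4), 100.0 * np.eye(4)
truth = lambda k: np.array([3.0 * k, 1.5 * k])   # true straight-line track
for k in range(1, 40):
    z = truth(k) + rng.normal(0, 5.0, 2)         # noisy position fix
    x, P = kalman_step(x, P, z)
print(np.round(x[:2], 1))  # near the true position (117.0, 58.5) at k = 39
```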
Abstract: This article presents experience in developing a fundamentally new technology for ethnopsychological experiments based on the use of virtual independent experimental variables. It is shown that ethnic prejudices result from the formation and development of specific semantic barriers arising under conditions of interethnic cooperation and communication between people. Overcoming them is more successful under the conditions of a specially organized teaching process in a polyethnic environment, characteristic of the modern institution.
Abstract: This paper describes a novel approach for deriving
modules from protein-protein interaction networks, which combines
functional information with topological properties of the network.
This approach is based on weighted clustering coefficient, which
uses weights representing the functional similarities between the
proteins. These weights are calculated according to the semantic
similarity between the proteins, which is based on their Gene
Ontology terms. We recently proposed an algorithm for identification
of functional modules, called SWEMODE (Semantic WEights for
MODule Elucidation), that identifies dense sub-graphs containing
functionally similar proteins. The rationale underlying this approach is that each module can be reduced to a set of triangles (protein triplets connected to each other). Here, we propose considering the semantic similarity weights of all triangle-forming edges between proteins. We also apply varying semantic similarity thresholds between neighbours of each node that are not neighbours of each other (and hence do not form a triangle), to derive new potential triangles to include in the module-defining procedure. The results show an improvement over the purely topological approach in terms of the number of predicted modules that match known complexes.
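A weighted clustering coefficient over triangle-forming edges, as described above, can be sketched as follows. The averaging here is one simple choice; SWEMODE's exact formula may differ.

```python
import itertools

def weighted_clustering_coefficient(node, adj, w):
    """Sketch of a weighted clustering coefficient: adj maps node -> set of
    neighbours, w[(u, v)] is the semantic-similarity weight of edge (u, v)
    (stored under both key orders). For each triangle closed around `node`,
    the mean weight of its three edges contributes to the coefficient."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    total = 0.0
    for u, v in itertools.combinations(sorted(nbrs), 2):
        if v in adj[u]:  # edge u-v closes a triangle with `node`
            total += (w[(node, u)] + w[(node, v)] + w[(u, v)]) / 3.0
    return 2.0 * total / (k * (k - 1))

# Toy network: one triangle A-B-C with semantic-similarity edge weights
adj = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}
w = {}
for edge, val in [(("A", "B"), 0.9), (("A", "C"), 0.6), (("B", "C"), 0.3)]:
    w[edge] = w[edge[::-1]] = val
print(weighted_clustering_coefficient("A", adj, w))  # 0.6
```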
Abstract: This paper develops c-charts based on a Zero-Inflated Poisson (ZIP) process approximated by a geometric distribution with parameter p. The estimated p fitted to the ZIP distribution is used to calculate the mean, median and variance of the geometric distribution, from which the c-chart is constructed by three different methods. For the cg-chart, the control limits are constructed from the mean and variance of the geometric distribution. For the cmg-chart, the mean is used to construct the control limits. For the cme-chart, the control limits are developed from the median and variance of the geometric distribution. The performance of the charts is assessed in terms of the Average Run Length and Average Coverage Probability. We found that for an in-control process, the cg-chart is superior for low mean levels at all levels of the zero proportion. For an out-of-control process, the cmg-chart and cme-chart are best for mean = 2, 3 and 4 at all parameter levels.
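The cg-chart construction described above (control limits from the mean and variance of the fitted geometric distribution) can be sketched as follows, assuming the failures-before-first-success parameterization of the geometric distribution; the paper's exact conventions may differ.

```python
import math

def cg_chart_limits(p):
    """Sketch of cg-chart control limits from a geometric distribution with
    parameter p (counting failures before the first success, so
    mean = (1 - p) / p and variance = (1 - p) / p**2)."""
    mean = (1 - p) / p
    var = (1 - p) / p ** 2
    ucl = mean + 3 * math.sqrt(var)
    lcl = max(0.0, mean - 3 * math.sqrt(var))  # counts cannot be negative
    return lcl, mean, ucl

lcl, cl, ucl = cg_chart_limits(0.5)
print(round(lcl, 3), round(cl, 3), round(ucl, 3))  # 0.0 1.0 5.243
```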
Abstract: This study investigated the effect of cross-sectional geometry on the sediment transport rate. Sediment transport processes are generally associated with environmental management, such as pollution caused by the formation of suspended sediment in the channel network of a watershed and the preservation of physical habitats and native vegetation, and with engineering applications, such as the influence of sediment transport on hydraulic structures and flood
control design. Many equations have been proposed for computing sediment transport, and the influence of many variables on sediment transport is well understood; however, the effect of other variables still requires further research. For open channel flow, sediment
transport capacity is recognized to be a function of friction slope,
flow velocity, grain size, grain roughness and form roughness, the
hydraulic radius of the bed section and the type and quantity of
vegetation cover. The effect of cross sectional geometry of the
channel on sediment transport is one of the variables that need
additional investigation. The width-depth ratio (W/d) is a
comparative indicator of the channel shape. The width is the total
distance across the channel and the depth is the mean depth of the
channel. The mean depth is best calculated as total cross-sectional
area divided by the top width. Channels with high W/d ratios tend to
be shallow and wide, while channels with low (W/d) ratios tend to be
narrow and deep. In this study, the effect of the width-depth ratio on sediment transport was demonstrated theoretically, by inserting the shape factor into the sediment continuity equation, and analytically, by utilizing field data sets for the Yalobusha River. Both approaches showed that as the width-depth ratio increases, the sediment transport decreases.
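The width-depth ratio defined above follows directly from mean depth = cross-sectional area / top width, so W/d = W^2 / A. The channel dimensions below are illustrative only.

```python
def width_depth_ratio(top_width, cross_section_area):
    """W/d ratio: mean depth is cross-sectional area divided by top width."""
    mean_depth = cross_section_area / top_width
    return top_width / mean_depth

# Two channels with the same cross-sectional area (20 m^2):
print(width_depth_ratio(20.0, 20.0))  # 20.0: high W/d, shallow and wide
print(width_depth_ratio(5.0, 20.0))   # 1.25: low W/d, narrow and deep
```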
Abstract: This paper proposes a vertical beamforming concept for a cellular network employing the Fractional Frequency Reuse technique together with cell sectorization. Two different beams are utilized for the cell center and the cell edge, separately. The proposed concept is validated through computer simulation in terms of SINR and channel capacity, and a comparison between horizontal and vertical beam formation is also in focus. The obtained results indicate that the proposed concept can improve the performance of cellular networks compared with one using horizontal beamforming.