Abstract: Biochemical Oxygen Demand (BOD) is a measure of
the oxygen consumed in the bacteria-mediated oxidation of organic
substances in water and wastewater. Theoretically, an infinite time is
required for complete biochemical oxidation of organic matter, but the
measurement is made over a 5-day test period at 20 °C or a 3-day test
period at 27 °C, with or without dilution. Researchers have worked to
reduce the measurement time further.
The objective of this paper is to review advances made in
BOD measurement, primarily to minimize the time required and to
overcome the measurement difficulties. The literature on four such
techniques is surveyed, namely BOD-BART™, biosensors, the
ferricyanide-mediated approach, and the luminous-bacteria-immobilized-chip
method. The basic principle, method of determination, data validation,
and advantages and disadvantages of each method are covered.
In the BOD-BART™ method, the time lag for the
system to change from an oxidative to a reductive state is calculated.
Biosensors couple a biological sensing element to a transducer that
produces a signal proportional to the analyte concentration. Each
microbial species has its own metabolic deficiencies; co-immobilization
of bacteria in a sol-gel biosensor increases the range of substrates
covered. In the ferricyanide-mediated approach, ferricyanide is used as
the electron acceptor instead of oxygen. In the luminous-bacterial-cells-immobilized-chip
method, bacterial bioluminescence, which is produced by the lux genes,
is observed; the change in light emission is measured and correlated
with BOD.
There is a scope to further probe into the rapid estimation of BOD.
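The rapid techniques above are all ultimately compared against the standard dilution test, which is commonly modelled by first-order kinetics, BOD_t = L0(1 - e^(-kt)). A minimal sketch of this textbook model follows; the L0 and k values are illustrative, not from the paper:

```python
import math

def bod_at(t_days, ultimate_bod, k_per_day):
    """First-order BOD model: BOD_t = L0 * (1 - e^(-k*t))."""
    return ultimate_bod * (1.0 - math.exp(-k_per_day * t_days))

# Illustrative values: ultimate BOD L0 = 300 mg/L, rate constant k = 0.23/day.
bod5 = bod_at(5, 300.0, 0.23)   # the conventional 5-day BOD at 20 °C
```

The 5-day figure captures only part of the ultimate demand, which is why the test period (and hence its reduction) matters.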
Abstract: Prolonged immobilization leads to significant
weakness and atrophy of the skeletal muscle and can also impair the
recovery of muscle strength following injury. Therefore, it is
important to minimize the period under immobilization and accelerate
the return to normal activity. This study examined the effects of heat
treatment and rest-inserted exercise on the muscle activity of the lower
limb during knee flexion/extension. Twelve healthy subjects were
assigned to 4 groups that included: (1) heat treatment + rest-inserted
exercise; (2) heat + continuous exercise; (3) no heat + rest-inserted
exercise; and (4) no heat + continuous exercise. Heat treatment was
applied for 15 min prior to exercise. The continuous exercise groups
performed knee flexion/extension at 0.5 Hz for 300 cycles without rest,
whereas the rest-inserted exercise groups performed the same exercise
but with 2 min of rest inserted after every 60 cycles.
Changes in the rectus femoris and hamstring muscle activities were
assessed at 0, 1, and 2 weeks of treatment by measuring the
electromyography signals of isokinetic maximum voluntary
contraction. Significant increases in both the rectus femoris and
hamstring muscle activities were observed after 2 weeks of treatment
only when both heat treatment and rest-inserted exercise were performed.
These results suggest that a combination of treatment techniques,
such as heat treatment and rest-inserted exercise, may expedite the
recovery of muscle strength following immobilization.
Abstract: Finger spelling is the art of communicating by signs
made with the fingers; it has been introduced into sign language to
serve as a bridge between sign language and verbal language.
Previous approaches to finger spelling recognition fall into
two categories: glove-based and vision-based. The glove-based
approach recognizes hand postures more simply and accurately than the
vision-based one, yet its interface requires the user to wear a
cumbersome glove and carry a load of cables connecting the device to a
computer. In contrast, vision-based approaches provide an attractive
alternative to this cumbersome interface and promise more natural and
unobtrusive human-computer interaction. Vision-based approaches
generally consist of two steps, hand extraction and recognition, and
the two steps are usually processed independently.
This paper proposes a real-time vision-based Korean finger spelling
recognition system that integrates hand extraction into recognition.
First, we tentatively detect a hand region using the CAMShift
algorithm. Then the fill factor and aspect ratio, estimated from the
width and height given by CAMShift, are used to choose candidates from
the database, which reduces the amount of matching in the recognition
step. To recognize the finger spelling, we use dynamic time warping
(DTW) based on modified chain codes so as to be robust to scale and
orientation variations. Since accurate hand regions, free of holes and
noise, must be extracted to improve precision, we use the graph-cuts
algorithm, which globally minimizes an energy function elegantly
expressed by Markov random fields (MRFs). In the experiments, the
computational time is less than 130 ms and does not depend on the
number of finger spelling templates in the database, as candidate
templates are selected in the extraction step.
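The paper's modified chain codes are not specified in the abstract, so the sketch below assumes plain 8-direction chain codes (values 0-7) and shows only the core DTW recurrence with a cyclic direction cost; function and variable names are ours:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two chain-code sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Cyclic difference between 8-direction chain codes (0..7).
            diff = abs(a[i - 1] - b[j - 1])
            cost = min(diff, 8 - diff)
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

template = [0, 1, 2, 2, 3, 4]
observed = [0, 1, 1, 2, 2, 3, 4]   # same contour, different sampling density
assert dtw_distance(template, observed) == 0  # warping absorbs the extra sample
```

The warping path is what gives the matcher its tolerance to sequences of different lengths, i.e. to scale variation along the contour.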
Abstract: This paper focuses on a technique for identifying the geological boundaries of the ground strata in front of a tunnel excavation site using the first-order adjoint method based on optimal control theory. A geological boundary is defined as the interface between layers of differing elastic modulus. In tunnel excavation, it is important to estimate the ground conditions ahead of the cutting face in advance, since excavating into weak strata or fault fracture zones may prolong the construction work and endanger workers. A theory for determining the geological boundary of the ground numerically is investigated, employing excavation blasts and their vibration waves as observation references. Following optimal control theory, a performance function defined as the square sum of the residuals between computed and observed velocities is minimized; the boundary is determined by this minimization. An elastic analysis governed by the Navier equation is carried out, treating the ground as an elastic body with linear viscous damping. To identify the boundary, the gradient of the performance function with respect to the geological boundary is calculated using the adjoint equation. The weighted gradient method is applied effectively in the minimization algorithm. To solve the governing and adjoint equations, the Galerkin finite element method and the average acceleration method are employed for the spatial and temporal discretizations, respectively. Based on the method presented in this paper, the boundaries between three different strata can be identified. For the numerical studies, the Suemune tunnel excavation site is considered. First, the blasting force is identified in order to improve the accuracy of the analysis; the geological boundary is then identified using the estimated blasting force.
With this identification procedure, numerical analysis results that closely match the observation data were obtained.
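The identification loop reduces to minimizing the performance function J by a gradient method. Below is a toy sketch with a single scalar parameter and an analytic gradient standing in for the adjoint-computed one; the weighted gradient method, the FEM solver and the adjoint equation themselves are beyond a short snippet:

```python
def performance(p, xs, ys):
    """J(p): square sum of residuals between computed and observed values."""
    return sum((p * x - y) ** 2 for x, y in zip(xs, ys))

def gradient(p, xs, ys):
    """dJ/dp, available analytically here (the paper obtains it from an
    adjoint equation instead of direct differentiation)."""
    return sum(2 * (p * x - y) * x for x, y in zip(xs, ys))

xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.2]         # synthetic "observed" data, true slope near 2
p = 0.0                      # initial guess for the unknown parameter
for _ in range(200):         # plain gradient descent with a fixed step
    p -= 0.01 * gradient(p, xs, ys)
```

The descent converges to the least-squares optimum of J; in the paper, each gradient evaluation requires one forward and one adjoint wave-propagation solve.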
Abstract: Many algorithms are available for sorting unordered elements, the most important among them being Bubble sort, Heap sort, Insertion sort and Shell sort. These algorithms have their own pros and cons. Shell sort, an enhanced version of insertion sort, reduces the number of swaps of the elements being sorted to minimize the complexity and time compared with insertion sort; it improves the efficiency of insertion sort by quickly shifting values to their destination. Its average sort time is O(n^1.25), while the worst-case time is O(n^1.5). The algorithm performs several iterations; in each iteration it rearranges some elements of the array in such a way that in the last iteration, when the gap value 'h' is one, the number of swaps is reduced. Donald L. Shell invented a formula to calculate the value of 'h'. This work identifies an improvement to the conventional Shell sort algorithm: the "Enhanced Shell Sort algorithm" improves the way the value of 'h' is calculated. It has been observed that by applying this algorithm, the number of swaps can be reduced by up to 60 percent compared with the existing algorithm, and in some other cases this enhancement was found faster than the existing algorithms available.
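The abstract does not give the enhanced formula for 'h', so the sketch below uses Shell's original halving sequence; the gap sequence is a parameter so that an enhanced sequence could be plugged in:

```python
def shell_sort(a, gaps=None):
    """Shell sort: gapped insertion sorts with a shrinking gap sequence."""
    a = list(a)
    n = len(a)
    if gaps is None:
        # Shell's original sequence: n//2, n//4, ..., 1.
        gaps, h = [], n // 2
        while h > 0:
            gaps.append(h)
            h //= 2
    for h in gaps:
        for i in range(h, n):
            key = a[i]
            j = i
            while j >= h and a[j - h] > key:
                a[j] = a[j - h]   # shift rather than swap: fewer moves
                j -= h
            a[j] = key
    return a

print(shell_sort([23, 5, 41, 1, 17, 8]))   # [1, 5, 8, 17, 23, 41]
```

Early large-gap passes move values close to their final positions, so the final h = 1 pass (plain insertion sort) does little work.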
Abstract: In this study, an optimization of a supersonic air-to-air ejector is carried out by a recently developed single-objective genetic algorithm based on adaptation of the sequence of individuals. The sequence adaptation is based on the shape-based distance of individuals and an embedded micro-genetic algorithm. The optimal sequence found defines the succession of CFD-based objective evaluations within each generation of a regular micro-genetic algorithm. A spring-based deformation mutates the computational grid, starting from the initial individual, via the adapted population in the optimized sequence. The selection of a generation's initial individual is knowledge-based. A direct comparison of the newly defined and the standard micro-genetic algorithm is carried out for the supersonic air-to-air ejector. The only objective is to minimize the loss of total stagnation pressure in the ejector. The result is that the sequence-adapted micro-genetic algorithm can provide results comparable to the standard algorithm, but in a significantly lower number of overall CFD iteration steps.
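The sequence adaptation and the CFD objective cannot be reproduced in a snippet; below is a generic micro-genetic algorithm skeleton of the kind the paper builds on (tiny population, elitism, restart on convergence), applied to a toy bit-counting objective. All parameter values are illustrative:

```python
import random

def micro_ga(fitness, n_bits=16, pop_size=5, restarts=30, seed=1):
    """Minimal micro-GA: tiny population, elitism, restart on convergence."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.randint(0, 1) for _ in range(n_bits)]
    best = rand_ind()
    for _ in range(restarts):
        # Restart: keep the elite individual, re-seed the rest at random.
        pop = [best] + [rand_ind() for _ in range(pop_size - 1)]
        while True:
            pop.sort(key=fitness)
            if fitness(pop[0]) < fitness(best):
                best = pop[0]
            # Converged when all individuals equal the current leader.
            if all(ind == pop[0] for ind in pop):
                break
            # Uniform crossover of the leader with the rest (no mutation;
            # micro-GAs rely on restarts instead of mutation for diversity).
            pop = [pop[0]] + [
                [a if rng.random() < 0.5 else b for a, b in zip(pop[0], ind)]
                for ind in pop[1:]
            ]
    return best

# Toy objective: minimize the number of 1-bits (all-zero string is optimal).
best = micro_ga(lambda ind: sum(ind))
```

In the paper each fitness call is a CFD run, which is exactly why reducing the number of evaluations per generation pays off.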
Abstract: Transmission and distribution lines are vital links between the generating units and consumers. Being exposed to the atmosphere, transmission lines have a high chance of faults, which must be dealt with immediately in order to minimize the damage they cause. In this paper, the discrete wavelet transform of the voltage signals at the two ends of transmission lines is analyzed. The transient energy of the level-5 detail coefficients is calculated for different fault conditions. It is observed that the variation in transient energy between healthy and faulted lines gives important information that can be very useful in classifying and locating the fault.
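The abstract does not name the mother wavelet, so the sketch below uses the Haar wavelet and a crude step change standing in for the fault transient; it only illustrates how an abrupt disturbance raises the detail-coefficient energy at level 5:

```python
import math

def haar_step(signal):
    """One Haar DWT level: returns (approximation, detail) coefficients."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def detail_energy(signal, level):
    """Energy of the detail coefficients at the given decomposition level."""
    approx, detail = list(signal), []
    for _ in range(level):
        approx, detail = haar_step(approx)
    return sum(d * d for d in detail)

# Illustrative waveform, 1024 samples of one fundamental cycle; the "fault"
# is modelled crudely as a sudden offset part-way through the record.
healthy = [math.sin(2 * math.pi * n / 1024) for n in range(1024)]
faulted = [v + (2.0 if n >= 500 else 0.0) for n, v in enumerate(healthy)]

# The abrupt transient shows up strongly in the level-5 detail energy.
assert detail_energy(faulted, 5) > 5 * detail_energy(healthy, 5)
```

In practice a library multilevel DWT with a suitable mother wavelet would replace this hand-rolled Haar cascade; the energy comparison is the part the paper exploits.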
Abstract: In molecular biology, microarray technology is widely and successfully utilized to measure gene activity efficiently. For less studied organisms, methods to design custom-made microarray probes are available. One design criterion is to select probes with minimal melting temperature variance, thus ensuring similar hybridization properties. If the microarray application focuses on the investigation of metabolic pathways, it is not necessary to cover the whole genome; it is more efficient to cover each metabolic pathway with a limited number of genes. Firstly, an approach is presented which minimizes the overall melting temperature variance of the selected probes for all genes of interest. Secondly, the approach is extended with the additional constraint of covering all pathways with a limited number of genes while still minimizing the overall variance. The new optimization problem is solved by a bottom-up programming approach, which reduces the complexity to make it computationally feasible. As an example, the new method is applied to the selection of microarray probes covering all fungal secondary metabolite gene clusters of Aspergillus terreus.
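The paper's bottom-up formulation and pathway-coverage constraints are more elaborate; the brute-force sketch below only illustrates the first objective, choosing one probe per gene so that the overall melting-temperature variance is minimal, on invented Tm values:

```python
from itertools import product
from statistics import pvariance

# Candidate probe melting temperatures (°C) per gene -- illustrative values.
candidates = {
    "geneA": [58.0, 61.5, 64.0],
    "geneB": [55.0, 60.5],
    "geneC": [62.0, 66.0, 59.8],
}

def best_selection(candidates):
    """Pick one probe per gene minimizing the Tm variance (brute force)."""
    genes = sorted(candidates)
    best, best_var = None, float("inf")
    for combo in product(*(candidates[g] for g in genes)):
        v = pvariance(combo)
        if v < best_var:
            best, best_var = dict(zip(genes, combo)), v
    return best, best_var

selection, var = best_selection(candidates)
```

Brute force is exponential in the number of genes, which is precisely why the paper resorts to a bottom-up (stage-by-stage) formulation for realistic instances.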
Abstract: Geometric errors in a manufacturing process can
be reduced by optimally positioning the fixture elements in the
fixture to make the workpiece stiff. In this paper we propose a new
fixture layout optimization method, N-3-2-1, for large metal sheets,
which combines a genetic algorithm with finite element analysis. The
objective is to minimize the sum of the nodal deflections normal to
the surface of the workpiece. Two different kinds of case studies are
presented, and the optimal positions of the fixturing elements are
obtained for each case.
Abstract: The problem of mapping tasks onto a computational grid with the aim of minimizing both the power consumption and the makespan, subject to deadline and architectural constraints, is considered in this paper. To solve this problem, we propose a solution from cooperative game theory based on the concept of the Nash Bargaining Solution. The proposed game-theoretic technique is compared against several traditional techniques. The experimental results show that when the deadline constraints are tight, the proposed technique achieves superior performance and reports competitive performance relative to the optimal solution.
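For a finite set of candidate mappings, the Nash Bargaining Solution picks the feasible outcome maximizing the product of the players' gains over their disagreement point. A minimal sketch with invented utility pairs (the paper's actual utilities involve power and makespan models):

```python
def nash_bargaining(outcomes, disagreement):
    """Pick the feasible outcome maximizing the Nash product."""
    d1, d2 = disagreement
    feasible = [(u1, u2) for u1, u2 in outcomes if u1 >= d1 and u2 >= d2]
    return max(feasible, key=lambda o: (o[0] - d1) * (o[1] - d2))

# Illustrative utility pairs, e.g. (energy saving, makespan saving) per
# candidate task mapping; the disagreement point is "no improvement".
outcomes = [(5.0, 1.0), (4.0, 3.0), (2.0, 4.0), (1.0, 4.5)]
print(nash_bargaining(outcomes, disagreement=(0.0, 0.0)))  # (4.0, 3.0)
```

Maximizing the product, rather than either utility alone, is what yields the balanced trade-off between the two objectives.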
Abstract: The objective of this research is to investigate the
advantages of using large-diameter 0.7 inch prestressing strands in
pretensioned applications. The advantages of large-diameter strands
are mainly realized in heavy construction applications. Bridges and
tunnels are subjected to increasing daily traffic and rapidly growing
truck weights, which raises the demand for higher structural capacity
of bridges and tunnels. In this research, precast prestressed
I-girders were considered as a case study. The flexural capacities of
girders fabricated using 0.7 inch strands and different concrete
strengths were calculated and compared with the capacities of girders
fabricated using 0.6 inch strands and equivalent concrete strengths.
The effect of the bridge deck concrete strength on the composite
deck-girder section capacity was investigated because of its possible
effect on the final section capacity. Finally, bridge cross-sections
of girders designed using regular 0.6 inch strands and the
large-diameter 0.7 inch strands were compared. The research findings
show that the structural advantages of 0.7 inch strands allow fewer
bridge girders, reduced material quantities, and lighter members. The
structural advantages of 0.7 inch strands are maximized when
high-strength concrete (HSC) is used in girder fabrication and
concrete with a minimum compressive strength of 5 ksi is used in
pouring the bridge decks. The use of 0.7 inch strands in the bridge
industry can partially contribute to the improvement of bridge
conditions, minimize construction cost, and reduce the construction
duration of a project.
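The capacity argument rests on the larger strand cross-section. As a rough illustration, the nominal areas and the 0.75·f_pu jacking limit below are commonly tabulated values for Grade 270 strand, assumed here rather than taken from the paper:

```python
# Nominal strand areas (in^2) as commonly tabulated for Grade 270 strand;
# these are assumptions for illustration, not values from the paper.
area = {"0.5in": 0.153, "0.6in": 0.217, "0.7in": 0.294}

jacking_stress = 0.75 * 270.0   # ksi, a typical 0.75*fpu jacking limit

for size in ("0.6in", "0.7in"):
    force = jacking_stress * area[size]
    print(size, round(force, 1), "kips per strand")

# A 0.7 inch strand thus carries roughly 35% more prestressing force,
# which is why fewer strands (and potentially fewer girders) suffice.
ratio = area["0.7in"] / area["0.6in"]
```
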
Abstract: In this paper, all variables are assumed to be
positive integers. The objective function may be maximized or
minimized, while the constraints are always expressed as
less-than-or-equal inequalities. The method repeatedly chooses a
suitable pair of the inequalities and eliminates one of the variables;
continuing this process eventually yields a single inequality in
(n - m + 1) unknowns, where m is the number of constraints and n is
the number of decision variables.
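The elimination step described above resembles Fourier-Motzkin elimination, in which two inequalities with opposite signs on a variable are combined so that the variable cancels; the sketch below of that single step is our reading of the abstract, not the paper's exact procedure:

```python
from fractions import Fraction

def eliminate(ineq_pos, ineq_neg, k):
    """Combine two '<=' inequalities (coeffs, rhs) with opposite signs on
    variable k, producing one inequality in which variable k cancels."""
    cp, bp = ineq_pos            # coefficient on x_k is positive here
    cn, bn = ineq_neg            # coefficient on x_k is negative here
    a, b = Fraction(cp[k]), Fraction(-cn[k])
    assert a > 0 and b > 0
    # b*(pos) + a*(neg) zeroes the k-th coefficient exactly.
    coeffs = [b * p + a * n for p, n in zip(cp, cn)]
    assert coeffs[k] == 0
    return coeffs, b * bp + a * bn

# x + 2y <= 10 and -x + y <= 2 combine (eliminating x) into 3y <= 12.
combined, rhs = eliminate(([1, 2], 10), ([-1, 1], 2), k=0)
```

Each such combination removes one variable, matching the abstract's count of ending with a single inequality in (n - m + 1) unknowns.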
Abstract: The load frequency control problem of power systems has attracted a lot of attention from engineers and researchers over the years. Increasing and quickly changing load demand, coupled with the inclusion of more generators with high variability (solar and wind power generators) on the network, is making power systems more difficult to regulate. Frequency changes are unavoidable, but regulatory authorities require that these changes remain within certain bounds. Engineers must perform the tricky task of adjusting the control system to maintain the frequency within the tolerated bounds. It is well known that to minimize frequency variations, a large proportional feedback gain (speed regulation constant) is desirable. However, this improvement in performance using proportional feedback comes at the expense of a reduced stability margin and still allows some steady-state error. A conventional PI controller is then included as a secondary control loop to drive the steady-state error to zero. In this paper, we propose a robust controller to replace the conventional PI controller, one which guarantees performance and stability of the power system over the range of variation of the speed regulation constant. Simulation results are shown to validate the superiority of the proposed approach on a simple single-area power system model.
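The primary-droop-plus-secondary-integral structure discussed above can be illustrated with a toy single-area simulation; the model and all parameter values below are illustrative textbook choices, not the paper's:

```python
# Single-area load-frequency model: primary droop + secondary integral loop.
# All parameters are illustrative per-unit values, not from the paper.
H, D = 5.0, 1.0          # inertia constant, load damping
R = 0.05                 # speed regulation (droop) constant
Tg = 0.5                 # governor-turbine time constant
Ki = 2.0                 # secondary (integral) gain
dPL = 0.1                # step load disturbance (p.u.)

f, Pm, Pref = 0.0, 0.0, 0.0   # frequency deviation, mech. power, setpoint
dt, peak = 0.01, 0.0
for _ in range(6000):          # 60 s of forward-Euler simulation
    df = (Pm - dPL - D * f) / (2 * H)      # swing equation
    dPm = (Pref - f / R - Pm) / Tg         # governor with 1/R droop feedback
    dPref = -Ki * f                        # integral action removes offset
    f, Pm, Pref = f + dt * df, Pm + dt * dPm, Pref + dt * dPref
    peak = max(peak, abs(f))
```

With primary droop alone the frequency settles at a nonzero offset of about -dPL/(D + 1/R); the secondary integral loop is what drives it back to zero, and it is this secondary loop the paper replaces with a robust controller.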
Abstract: Since the 1940s, many promising telepresence
research results have been obtained. However, telepresence
technology still has not reached industrial usage. Although human
intelligence is necessary for the successful execution of most manual
assembly tasks, human capability is limited in some cases, such as
the assembly of heavy parts in small/medium lots or of prototypes. In
such cases of manual assembly, the help of industrial robots is
mandatory. Telepresence technology can be considered
as a solution for performing assembly tasks, where the human
intelligence and haptic sense are needed to identify and minimize the
errors during an assembly process and a robot is needed to carry
heavy parts. In this paper, preliminary steps to integrate the
telepresence technology into industrial robot systems are introduced.
The system described here combines the human haptic sense
and the industrial robot's capability to perform a manual assembly
task remotely using a force feedback joystick. The mapping between the
joystick's degrees of freedom (DOF) and the robot's is
introduced. Simulation and experimental results are shown, and future
work is discussed.
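A joystick-to-robot DOF mapping of the kind mentioned above can be as simple as scaling each axis into a Cartesian velocity command; the scale and deadband values below are invented for illustration and are not the paper's mapping:

```python
def map_joystick_to_robot(axes, scale=(0.05, 0.05, 0.05), deadband=0.1):
    """Map joystick axes (-1..1) to Cartesian robot velocity commands (m/s).

    Scale and deadband values are illustrative, not taken from the paper.
    """
    cmd = []
    for a, s in zip(axes, scale):
        if abs(a) < deadband:      # ignore small stick jitter near centre
            cmd.append(0.0)
        else:
            cmd.append(s * a)
    return cmd

print(map_joystick_to_robot([0.5, -1.0, 0.05]))  # [0.025, -0.05, 0.0]
```

In a force-feedback setup the same mapping runs in reverse for the haptic channel, scaling measured contact forces back onto the joystick motors.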
Abstract: This paper presents experimental results on the effect of the applied voltage stress frequency on the occurrence of electrical treeing in 22 kV cross-linked polyethylene (XLPE) insulated cable. A hollow disk of XLPE insulating material, 5 mm thick, taken from an unused high voltage cable was used as the specimen in this study. A stainless steel needle was inserted gradually into the specimen at an elevated temperature of 105-110 °C to give a needle-tip-to-earth-plane electrode separation of 2.5 ± 0.2 mm. The specimen was then annealed for 5 minutes to minimize any mechanical stress built up around the needle-plane region before it was cooled down to room temperature. Each specimen was subjected to the same applied voltage stress level of 8 kV AC rms at various frequencies: 50, 100, 500, 1000 and 2000 Hz. The initiation time, propagation speed and pattern of the electrical treeing were examined in order to study the effect of the applied voltage stress frequency. The experimental results show that the initiation time of visible treeing decreases with increasing applied voltage frequency and that the propagation speed of the electrical treeing increases with increasing frequency. Furthermore, two types of electrical treeing, bush-like and branch-like, were observed. The experimental results thus confirm the effect of the voltage stress frequency.
Abstract: We consider a two-way relay network in which two sources exchange information. A relay helps the two sources exchange information using the decode-and-XOR-forward protocol. We investigate the power minimization problem under minimum rate constraints. The system needs two time slots, and in each time slot the required rate pair should be achievable. The power consumption is minimized in each time slot, and a closed-form solution is obtained. The simulation results confirm that the proposed power allocation scheme consumes lower total power than the conventional schemes.
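The paper's closed-form allocation is not given in the abstract; the sketch below only illustrates the building block, the minimum power meeting a rate constraint on a single Gaussian link, applied naively to the two slots. For simplicity the multiple-access slot is treated as orthogonal single-user links, which the paper's actual solution does not assume:

```python
def min_power(rate_bps_hz, channel_gain, noise=1.0):
    """Minimum transmit power achieving a target rate on a Gaussian link,
    from R = log2(1 + g*P/N)  =>  P = N*(2**R - 1)/g."""
    return noise * (2.0 ** rate_bps_hz - 1.0) / channel_gain

# Slot 1: both sources transmit to the relay (treated as separate links
# here); slot 2: the relay broadcasts the XOR and must satisfy the worse
# of the two downlink constraints. Channel gains are illustrative.
g1, g2 = 0.8, 0.5
p_slot1 = min_power(1.0, g1) + min_power(1.0, g2)
p_slot2 = max(min_power(1.0, g1), min_power(1.0, g2))
total = p_slot1 + p_slot2
```

Because the XOR packet must be decodable by both destinations, the broadcast power is set by the weaker downlink, which is the max() in the second slot.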
Abstract: Money laundering has been described by many as the lifeblood of crime and is a major threat to the economic and social well-being of societies. It has been recognized that the banking system has long been the central element of money laundering. This is in part due to the complexity and confidentiality of the banking system itself. It is generally accepted that effective anti-money laundering (AML) measures adopted by banks will make it tougher for criminals to get their "dirty money" into the financial system. In fact, for law enforcement agencies, banks are considered to be an important source of valuable information for the detection of money laundering. However, from the banks' perspective, the main reason for their existence is to make as much profit as possible. Hence their cultural and commercial interests are totally distinct from those of the law enforcement authorities. Undoubtedly, AML laws create a major dilemma for banks as they produce a significant shift in the way banks interact with their customers. Furthermore, the implementation of the laws not only creates significant compliance problems for banks, but also has the potential to adversely affect the operations of banks. As such, it is legitimate to ask whether these laws are effective in preventing money launderers from using banks, or whether they simply put an unreasonable burden on banks and their customers. This paper attempts to address these issues and analyze them against the background of the Malaysian AML laws. It must be said that effective coordination between the AML regulator and the banking industry is vital to minimize the problems faced by the banks and thereby to ensure effective implementation of the laws in combating money laundering.
Abstract: There are many real-world problems in which
parameters like the arrival time of new jobs, failure of resources,
and completion time of jobs change continuously. This paper tackles
the problem of scheduling jobs with random due dates on multiple
identical machines in a stochastic environment. First, the longest
processing time (LPT) scheduling rule is used to assign jobs to the
different machine centers; then the particular sequence of jobs to be
processed on each machine is found using simple stochastic techniques.
The performance parameter under consideration is the maximum lateness
with respect to the stochastic due dates, which are independent and
exponentially distributed. Finally, a relevant problem is solved using
the techniques presented in the paper.
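The first stage, LPT assignment, can be sketched directly (the stochastic sequencing stage is not detailed in the abstract):

```python
import heapq

def lpt_assign(processing_times, n_machines):
    """Longest-processing-time rule: consider jobs in decreasing order of
    processing time, giving each to the currently least-loaded machine."""
    loads = [(0.0, m) for m in range(n_machines)]
    heapq.heapify(loads)                      # min-heap keyed on load
    assignment = {m: [] for m in range(n_machines)}
    for job, t in sorted(enumerate(processing_times), key=lambda jt: -jt[1]):
        load, m = heapq.heappop(loads)        # least-loaded machine
        assignment[m].append(job)
        heapq.heappush(loads, (load + t, m))
    return assignment

times = [7, 5, 4, 3, 3, 2]
print(lpt_assign(times, 2))
```

On this example both machines end with a load of 12, illustrating the balancing behaviour that makes LPT a standard first stage before per-machine sequencing.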
Abstract: We study the spatial design of experiments, in which we want to select a most informative subset, of prespecified size, from a set of correlated random variables. The problem arises in many applied domains, such as meteorology, environmental statistics, and statistical geology, where observations can be collected at different locations and possibly at different times. In spatial design, when the design region and the set of interest are discrete, the covariance matrix completely describes any objective function, and our goal is to choose a feasible design that minimizes the resulting uncertainty. The problem is recast as that of maximizing the determinant of the covariance matrix of the chosen subset; this problem is NP-hard. For using these designs in computer experiments, the design space is in many cases very large and it is not possible to calculate the exact optimal solution. Heuristic optimization methods can discover efficient experiment designs in situations where traditional designs cannot be applied, exchange methods are ineffective, and exact solutions are not possible. We developed a genetic algorithm (GA) to take advantage of its exploratory power, and we demonstrate the successful application of this method on a real case of design of experiments with a very large design space.
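For a design space small enough to enumerate, the objective the GA searches, maximal determinant of the chosen covariance submatrix, can be checked exhaustively; the covariance values below are invented:

```python
from itertools import combinations

def det(m):
    """Determinant by Laplace expansion (fine for the small subsets here)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:]
                                          for row in m[1:]])
               for j in range(len(m)))

def d_optimal_subset(cov, k):
    """Exhaustively pick the k locations whose covariance submatrix has
    maximal determinant; the paper's GA searches this same space when the
    design region is far too large to enumerate."""
    idx = range(len(cov))
    return max(combinations(idx, k),
               key=lambda s: det([[cov[i][j] for j in s] for i in s]))

# Illustrative covariance of four candidate locations.
cov = [[1.0, 0.9, 0.2, 0.1],
       [0.9, 1.0, 0.3, 0.2],
       [0.2, 0.3, 1.0, 0.8],
       [0.1, 0.2, 0.8, 1.0]]
best = d_optimal_subset(cov, 2)
```

The winner pairs the two least-correlated locations, matching the intuition that an informative design spreads observations where they are least redundant.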
Abstract: Individually, network reconfiguration and capacitor control
each perform well in minimizing power loss and improving the voltage
profile of a distribution system. But for heavy reactive power loads,
network reconfiguration alone, and for heavy active power loads,
capacitor placement alone, cannot effectively reduce power loss and
enhance the voltage profile. In this paper, a hybrid approach
that combines network reconfiguration and capacitor placement using
the Harmony Search Algorithm (HSA) is proposed to minimize power
loss and improve the voltage profile. The proposed approach
is tested on the standard IEEE 33- and 16-bus systems. Computational
results show that the proposed hybrid approach can minimize losses
more efficiently than network reconfiguration or capacitor control
alone. The results of the proposed method are also compared with
results obtained by simulated annealing (SA); the proposed method
outperforms SA in terms of solution quality.
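A basic harmony search, of which HSA variants for reconfiguration and capacitor placement are elaborations, can be sketched on a toy continuous "loss" function; the parameter values below are conventional defaults, not the paper's:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, iters=2000, seed=0):
    """Basic harmony search: improvise new solutions from a memory of good
    ones (rate hmcr), with small pitch adjustments (rate par)."""
    rng = random.Random(seed)
    rand_x = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    memory = sorted((rand_x() for _ in range(hms)), key=f)
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:               # draw from harmony memory
                x = memory[rng.randrange(hms)][d]
                if rng.random() < par:            # pitch adjustment
                    x += rng.uniform(-1, 1) * 0.01 * (hi - lo)
            else:                                 # fresh random value
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        if f(new) < f(memory[-1]):                # replace the worst harmony
            memory[-1] = new
            memory.sort(key=f)
    return memory[0]

# Toy stand-in objective for "power loss" over two control settings.
loss = lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.7) ** 2
best = harmony_search(loss, [(-1, 1), (-1, 1)])
```

In the hybrid approach, the harmony vector would instead encode discrete switch states and capacitor sizes, with a load-flow solver supplying the loss value.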