Abstract: Iris localization is a crucial step in biometric identification systems. The identification process is usually implemented in three stages: iris localization, feature extraction, and finally pattern matching. The accuracy of iris localization, as the first stage, affects all subsequent stages, which shows its importance in an iris-based biometric system. In this paper, we take the Daugman iris localization method as a standard, propose a new method in this field, and then analyze and compare the results of both on a standard set of iris images. The proposed method is based on the detection of the circular edge of the iris, improved using fuzzy circles and surface energy difference concepts. The method is easy to implement and, compared with other methods, offers rather high accuracy and speed. Test results show that the accuracy of the proposed method is close to that of the Daugman method while its computation speed is about 10 times faster.
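As a reference point for the comparison above, Daugman localization searches for the circle whose radial derivative of the contour-averaged intensity is maximal. The following is a minimal sketch of that integro-differential search on a synthetic image; the Gaussian smoothing step is omitted for brevity, and the image, candidate centers and radius range are illustrative assumptions, not the paper's data.

```python
import numpy as np

def circle_mean(img, cx, cy, r, n=64):
    """Mean intensity sampled along a circle of radius r centred at (cx, cy)."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    xs = np.clip((cx + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def daugman_search(img, centers, radii):
    """Pick the (cx, cy, r) maximising the radial derivative of the circular
    mean: Daugman's integro-differential operator, here without its Gaussian
    smoothing for brevity."""
    best, best_score = None, -1.0
    for cx, cy in centers:
        means = np.array([circle_mean(img, cx, cy, r) for r in radii])
        grad = np.abs(np.diff(means))       # radial derivative magnitude
        i = int(np.argmax(grad))
        if grad[i] > best_score:
            best_score, best = grad[i], (cx, cy, radii[i])
    return best

# Synthetic eye image: dark disc of radius 20 at (40, 40) on a bright field.
img = np.full((80, 80), 200.0)
yy, xx = np.mgrid[0:80, 0:80]
img[(xx - 40) ** 2 + (yy - 40) ** 2 <= 20 ** 2] = 50.0

cx, cy, r = daugman_search(img, [(38, 40), (40, 40), (42, 40)], list(range(10, 30)))
print(cx, cy, r)  # should recover a circle close to (40, 40, 20)
```

The sharp intensity jump at the iris boundary makes the radial derivative peak at the true circle, which is the property both the Daugman method and circular-edge detectors exploit.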
Abstract: An efficient and green protocol for the synthesis of α-aminonitrile derivatives by the one-pot reaction of different aldehydes with amines and trimethylsilyl cyanide has been developed using natural alumina, alumina sulfuric acid (ASA), nano-γ-alumina, and nano-alumina sulfuric acid (nano-ASA) under microwave irradiation and solvent-free conditions. The advantages of these methods are short reaction times, high yields, mild conditions and easy work-up. The catalysts can be recovered and reused in subsequent reactions without any appreciable loss of efficiency.
Abstract: The rates of production of the main products of the Fischer-Tropsch reactions over an Fe/HZSM5 bifunctional catalyst in a fixed-bed reactor are investigated over a broad range of temperature, pressure, space velocity, H2/CO feed molar ratio, and CO2, CH4 and water flow rates. Model discrimination and parameter estimation were performed according to the integral method of kinetic analysis. Due to the lack of an established mechanism for Fischer-Tropsch synthesis on bifunctional catalysts, 26 different models were tested and the best model was selected. Comprehensive one- and two-dimensional heterogeneous reactor models are developed to simulate the performance of fixed-bed Fischer-Tropsch reactors. To reduce computational time for optimization purposes, an Artificial Feed-Forward Neural Network (AFFNN) has been used to describe intra-particle mass and heat transfer diffusion in the catalyst pellet. The products' reaction rates increase with H2 partial pressure and decrease with CO partial pressure. The results show that the hybrid model is in good agreement with the rigorous mechanistic model while being about 25-30 times faster.
Abstract: A novel interpolation scheme to extend the usable spectrum and perform upconversion in high-performance D/A converters is addressed in this paper. By adjusting the pulse width of the cycle and the code-production circuit, the interpolation process inserts expansion codes that are either null codes or complementary codes. The number and type of interpolated codes determine whether the DAC works in normal mode or in a multi-mixer mode, converting the input digital data into either a normal analog signal or a mixed analog signal whose mixer frequency is higher than the data frequency. Simulation results show that the proposed scheme and apparatus can extend the usable frequency spectrum into the fifth to sixth Nyquist zones, beyond conventional DACs.
Abstract: This paper compares planning results for the electricity and water generation inventory up to the year 2030 in the State of Kuwait. Currently, the generation inventory consists of oil- and gas-fired technologies only. The planning study considers two main cases. The first, the Reference case, examines a generation inventory based on oil- and gas-fired generation technologies only. The second case examines the inclusion of renewables in the generation inventory under two scenarios. In the first scenario, Ref-RE, the renewable build-out is based on the optimum economic performance of the overall generation system; results show that the optimum installed renewable capacity provides 11% of electric energy generation. In the second scenario, Ref-RE20, the renewable capacity build-out is forced to provide 20% of electric energy by 2030. The respective energy system costs of the Reference, Ref-RE and Ref-RE20 scenarios reach US$24, 10 and 14 billion annually in 2030.
Abstract: Transmission network expansion planning (TNEP) is a basic part of power system planning that determines where, when and how many new transmission lines should be added to the network. Up to now, various methods have been presented to solve the static transmission network expansion planning (STNEP) problem. However, in all of these methods the adequacy of the lines at the end of the planning horizon has not been considered; that is, the expanded network loses adequacy after some time and needs to be expanded again. In this paper, expansion planning is implemented by merging a line-loading parameter into the STNEP and inserting the investment cost into the fitness function constraints of a genetic algorithm. The expanded network then possesses maximum adequacy to supply the load demand, and its transmission lines become overloaded later. Finally, an adequacy index is defined and used to compare designs with different investment costs and adequacy rates. The proposed idea has been tested on Garver's network. The results show that the resulting network possesses maximum economic efficiency.
Abstract: Nowadays, predicting the political risk level of a country has become a critical issue for investors who wish to obtain accurate information about the stability of business environments. Since investors are usually laymen rather than professional IT personnel, this paper proposes a framework named GECR to help non-expert persons discover political risk stability over time based on political news and events.
To achieve this goal, the Bayesian Network approach was applied to 186 political news items from Pakistan as a sample dataset. Bayesian Networks, an artificial intelligence approach, were employed in the presented framework because they are a powerful technique for modeling uncertain domains. The results showed that our framework, with Bayesian Networks as the decision support tool, predicted the political risk level with a high degree of accuracy.
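The abstract does not specify GECR's network structure, so the sketch below only illustrates the underlying mechanics: exact Bayesian-network inference by enumeration over a hypothetical two-node chain (Unrest → HighRisk → NegativeNews) with made-up probability tables, not the framework's actual model.

```python
# Hypothetical CPTs (illustrative only, not from the paper):
p_unrest = 0.3                               # P(Unrest)
p_risk_given = {True: 0.8, False: 0.1}       # P(HighRisk | Unrest)
p_news_given_risk = {True: 0.9, False: 0.2}  # P(NegativeNews | HighRisk)

def posterior_risk_given_negative_news():
    """P(HighRisk | NegativeNews) by enumerating the hidden Unrest node."""
    num = den = 0.0
    for unrest in (True, False):
        pu = p_unrest if unrest else 1.0 - p_unrest
        for risk in (True, False):
            pr = p_risk_given[unrest] if risk else 1.0 - p_risk_given[unrest]
            joint = pu * pr * p_news_given_risk[risk]
            den += joint
            if risk:
                num += joint
    return num / den

p = posterior_risk_given_negative_news()
print(round(p, 3))  # P(HighRisk | NegativeNews) ≈ 0.669
```

Observing negative news raises the risk belief from the prior P(HighRisk) = 0.31 to about 0.67, which is the kind of evidence-driven update the framework applies to each news item.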
Abstract: In a competitive energy market, system reliability should be maintained at all times. Since power system operation is online in nature, the energy balance requirements must be satisfied to ensure reliable operation of the system. To achieve this, information regarding the expected status of the system, the scheduled transactions and the relevant inputs necessary to make either a transaction contract or a transmission contract operational has to be made available in real time. The real-time procedure proposed here facilitates this. This paper proposes a quadratic curve learning procedure that enables a generator's contribution to the retailer demand, the power loss of a transaction in a line at the retail end and its associated losses to be predicted for an upcoming operating scenario. A Matlab program was used to test it on the 24-bus IEEE Reliability Test System, and the results are found to be acceptable.
Abstract: The leisure boatbuilding industry has tight profit margins that demand boats be built to high quality but at low cost. Consequently, reduced design times combined with increased use of design-for-production can yield large benefits. The evolutionary nature of the boatbuilding industry leads to extensive reuse of previous vessels in new designs. With the increase in automated tools for concurrent engineering within structural design, it is important that these tools can reuse this information and subsequently feed it to designers. The ability to accurately gather these materials and parts data is also a key component of such tools. This paper therefore aims to develop an architecture made up of neural networks and databases that feeds information effectively to designers based on previous design experience.
Abstract: A Finite Volume method based on characteristic fluxes for compressible fluids is developed. An explicit cell-centered resolution is adopted, where second- and third-order accuracy is provided by using two different MUSCL schemes with Minmod, Sweby or Superbee limiters for the hyperbolic part. Several different time integrators are used and described in this paper. Resolution is performed on a generic unstructured Cartesian grid, where solid boundaries are handled by a Cut-Cell method. Interfaces are explicitly advected in a non-diffusive way, ensuring local mass conservation. An improved cell cutting has been developed to handle boundaries of arbitrary geometrical complexity. Instead of using a polygon clipping algorithm, we use the voxel traversal algorithm coupled with a local flood-fill scanline to intersect 2D or 3D boundary surface meshes with the fixed Cartesian grid. The small-cell stability problem near the boundaries is solved using a fully conservative merging method. Inflow and outflow conditions are also implemented in the model. The solver is validated on 2D academic test cases, such as the flow past a cylinder. The latter test cases are performed both in the frame of the body and in a fixed frame where the body moves across the mesh. An adaptive Cartesian grid is provided by Paramesh, without complex geometries for the moment.
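One concrete piece of the scheme above, the Minmod limiter and the second-order MUSCL interface reconstruction, can be sketched as follows. This is a generic 1D reconstruction on an array of cell averages, not the solver's actual code.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: the smaller-magnitude slope when both slopes
    share a sign, zero otherwise (keeps the MUSCL reconstruction TVD)."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_interface_states(u):
    """Second-order left/right states at the faces of interior cells,
    built from limited slopes of the cell-average array u."""
    du_left = u[1:-1] - u[:-2]       # backward differences
    du_right = u[2:] - u[1:-1]       # forward differences
    slope = minmod(du_left, du_right)
    u_left = u[1:-1] + 0.5 * slope   # state extrapolated to face i+1/2
    u_right = u[1:-1] - 0.5 * slope  # state extrapolated to face i-1/2
    return u_left, u_right

uL, uR = muscl_interface_states(np.arange(5.0))
print(uL, uR)  # smooth data: faces at the midpoints between cell averages
```

At a discontinuity the backward and forward differences disagree in sign, the limited slope drops to zero, and the scheme falls back to first order locally, which is exactly how the limiter suppresses spurious oscillations.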
Abstract: This contribution deals with the influence of high-speed parameters on the quality of the machined surface. In general, the principle of high-speed cutting lies in achieving shorter machining times with a concurrent increase in the accuracy and quality of the machined areas, which are largely irregular, mathematically hard-to-define shapes. High-speed machining is a highly effective machining method with the following goals: increasing machining productivity, increasing the quality of the machined surface, improving machining economy, and improving the ecological aspects of machining. This article is based on an experiment performed by the Department of Machining and Assembly of the Faculty of Mechanical Engineering of VŠB - Technical University of Ostrava.
Abstract: Recently, content delivery services have grown rapidly
over the Internet. For ASPs (Application Service Provider) providing
content delivery services, P2P architecture is beneficial to reduce
outgoing traffic from content servers. On the other hand, ISPs are
suffering from the increase in P2P traffic. The P2P traffic is
unnecessarily redundant because the same content or the same
fractions of content are transferred through an inter-ISP link several
times. Subscriber ISPs have to pay a transit fee to upstream ISPs based
on the volume of inter-ISP traffic. In order to solve such problems,
several works have been done for the purpose of P2P traffic reduction.
However, these existing works cannot control the traffic volume of a
particular link. To meet this ISP operational requirement, we propose
a method to keep the traffic volume of a link within a preconfigured
upper bound. We evaluated the proposed method through a simulation at
a 1,000-user scale and confirmed that the traffic volume could be kept
below the upper bound under all evaluated conditions. Moreover, our
method could control the traffic volume to 98.95% link usage relative
to the target value.
Abstract: In order to monitor for traffic traversal, sensors can be
deployed to perform collaborative target detection. Such a sensor
network achieves a certain level of detection performance with the
associated costs of deployment and routing protocol. This paper
addresses these two points of sensor deployment and routing algorithm
in the situation where the absolute quantity of sensors or total energy
becomes insufficient. The discussion on the best deployment scheme
concludes that two kinds of deployments, Normal and Power-law
distributions, sustain coverage 6 and 3 times longer than a Random
distribution, respectively. Routing algorithms that achieve good
performance under each deployment scheme are also discussed. This
discussion concludes that, in place of the traditional algorithm, a
new algorithm can extend the coverage duration by 4 times in a Normal
distribution, in the circumstance where every deployed sensor operates
as a binary model.
Abstract: In modern manufacturing systems, thermal cutting techniques using oxyfuel, plasma and laser have become indispensable for the shape forming of high-quality complex components; however, conventional chip-removal production techniques still have widespread use in the manufacturing industry. Both types of machining operations require positioning the end-effector tool at the edge where the cutting process commences. This repositioning of the cutting tool, repeated several times in every machining operation, is termed non-productive time or airtime motion. Minimizing this non-productive machining time plays an important role in mass production with high-speed machining. Since the tool moves between regions by rapid movement and visits a particular region only once in the whole operation, the non-productive time can be minimized by synchronizing the tool movements. In this work, the problem is formulated as a general travelling salesman problem (TSP) and a genetic algorithm (GA) approach is applied to solve it. To improve the efficiency of the algorithm, the GA has been hybridized with a novel special heuristic and with simulated annealing (SA). A novel heuristic combined with the GA has been developed for synchronizing toolpath movements during repositioning of the tool. A comparative analysis of the new metaheuristic techniques against the simple genetic algorithm has been performed. The proposed metaheuristic approach shows better performance than the simple genetic algorithm in minimizing the non-productive toolpath length. The results obtained with the hybrid simulated annealing genetic algorithm (HSAGA) are also found to be better than those obtained using the simple genetic algorithm alone.
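The TSP formulation of airtime minimization can be illustrated with a plain genetic algorithm using order crossover and swap mutation. This is a generic sketch on made-up tool positions, not the paper's hybrid HSAGA; the special heuristic and the SA component are omitted.

```python
import math
import random

def tour_length(order, pts):
    """Total closed-tour (airtime) travel length for a visiting order."""
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def order_crossover(p1, p2):
    """OX: copy a random slice from p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[a:b])
    fill = [g for g in p2 if g not in hole]
    return fill[:a] + p1[a:b] + fill[a:]

def ga_tsp(pts, pop_size=60, gens=200, mut=0.2, seed=1):
    """Minimise tour length over visiting orders with a simple elitist GA."""
    random.seed(seed)
    n = len(pts)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda o: tour_length(o, pts))
        elite = pop[:pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            child = order_crossover(*random.sample(elite, 2))
            if random.random() < mut:        # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda o: tour_length(o, pts))

pts = [(0, 0), (0, 1), (1, 1), (1, 0)]       # toy tool-visit positions
best = ga_tsp(pts)
print(tour_length(best, pts))  # expect the square's perimeter, 4.0
```

The hybrid approaches in the paper layer a constructive heuristic and simulated-annealing acceptance on top of this basic loop to escape the local optima that a plain GA tends to settle into.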
Abstract: In this study, a low-temperature sensor highly selective to CO in the presence of methane is fabricated using 4 nm SnO2 quantum dots (QDs) prepared by sonication-assisted precipitation. An SnCl4 aqueous solution was precipitated with ammonia under sonication, which continued for 2 h. A part of the sample was then dried and calcined at 400°C for 1.5 h and characterized by XRD and BET. The average particle size and specific surface area of the SnO2 QDs, as well as their sensing properties, were compared with SnO2 nanoparticles prepared by the conventional sol-gel method. The BET surface areas of the sonochemically as-prepared product and the one calcined at 400°C for 1.5 h are 257 m2/g and 212 m2/g respectively, while the specific surface area of the SnO2 nanoparticles prepared by the conventional sol-gel method is about 80 m2/g. XRD spectra revealed that a pure crystalline phase of SnO2 is formed for both the as-prepared and calcined samples of SnO2 QDs; however, for the sample prepared by the sol-gel method and calcined at 400°C, SnO crystals are detected along with those of SnO2. The SnO2 quantum dots show exceedingly high sensitivity to CO at concentrations of 100, 300 and 1000 ppm over the whole temperature range (25-350°C). At 50°C a sensitivity of 27 was obtained for 1000 ppm CO, which increases to a maximum of 147 when the temperature rises to 225°C and then drops off, while the maximum sensitivity of the SnO2 sample prepared by the sol-gel method was 47.2, obtained at 300°C. At the same time, no sensitivity to methane is observed over the whole temperature range for the SnO2 QDs. The response and recovery times of the sensor decrease sharply with temperature, while the high selectivity to CO does not deteriorate.
Abstract: The aim of this study was to identify factors associated with seat belt wearing among road users in Malaysia. An evidence-based approach through in-depth crash investigation was utilised to meet this objective. The scope covered crashes investigated by the Malaysian Institute of Road Safety Research (MIROS) involving passenger vehicles between 2007 and 2010. Crash information on a total of 99 crash cases involving 240 vehicles and 864 occupants was obtained during the study period. Statistical tests and logistic regression analysis were performed. The analysis revealed that gender, seat position and age were associated with seat belt wearing compliance in Malaysia. Males are 97.6% more likely to wear a seat belt than females (95% CI 1.317 to 2.964). By seat position, the findings indicate that frontal occupants were 82 times more likely to be wearing a seat belt (95% CI 30.199 to 225.342) than rear occupants. It is also important to note that the odds of seat belt wearing increase by about 2.64% (95% CI 1.0176 to 1.0353) for every one-year increase in age. This study is essential for understanding the Malaysian tendency to belt up while occupying a vehicle. The factors highlighted in this study should be emphasized in road safety education in order to increase the seat belt wearing rate in this country and ultimately to prevent deaths due to road crashes.
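The reported effects are logistic-regression odds ratios, so they compound multiplicatively across unit changes in a covariate. A small worked check, using only the point estimates quoted in the abstract, makes the age effect concrete.

```python
import math

# Point estimates reported in the abstract:
or_male = 1.976    # males vs females ("97.6% more likely")
or_front = 82.0    # frontal vs rear occupants
or_age = 1.0264    # per additional year of age

# The underlying regression coefficient is the log odds ratio:
beta_age = math.log(or_age)

# Odds ratios compound multiplicatively, so over a 10-year age gap:
or_age_10 = or_age ** 10       # = exp(10 * beta_age)
print(round(or_age_10, 3))     # about 1.30, i.e. roughly 30% higher odds
```

This is why a seemingly small per-year effect (2.64%) still translates into a meaningful difference between age groups a decade apart.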
Abstract: A DEA model can generally evaluate performance using multiple inputs and outputs for the same period. However, a production lead-time phenomenon is sometimes hard to avoid, as in long-term projects or marketing activities. A couple of models have been suggested to capture this time-lag issue in the context of DEA. This paper develops a dual-MPO model to deal with the time-lag effect in evaluating efficiency. A numerical example is also given to show that the proposed model can be used to obtain the efficiency and reference set of inefficient DMUs and to obtain projected target values of the input attributes needed for inefficient DMUs to become efficient.
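For intuition about the lag-free DEA evaluation the paper extends, the single-input single-output CCR case reduces to normalizing each DMU's output/input ratio by the best observed ratio; the general multi-input/multi-output case requires solving a linear program per DMU instead. A toy sketch with invented data:

```python
def ccr_efficiency_single(inputs, outputs):
    """CCR efficiency for one input and one output per DMU: each DMU's
    productivity ratio divided by the best observed ratio. (The general
    multi-input/multi-output case needs an LP per DMU.)"""
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical DMUs: identical output, different input consumption.
eff = ccr_efficiency_single([2.0, 4.0, 5.0], [1.0, 1.0, 1.0])
print(eff)  # the first DMU is efficient; the others are benchmarked to it
```

The efficient DMU here plays the role of the reference set: inefficient DMUs would reach efficiency by scaling their input down toward the projected target, which is the kind of target value the proposed dual-MPO model produces while also accounting for time lags.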
Abstract: Increasing competitiveness in the manufacturing industry is forcing manufacturers to seek effective processing schedules. This paper presents an optimization approach to manufacturing scheduling for dependent parts with given processing sequences and times on multiple machines. By defining the decision variables as the start and end moments of part processing, straightforward variable restrictions can be used to satisfy different technological requirements and to formulate optimization tasks that are easy to understand and solve for arbitrary numbers of parts and machines. A case study example is solved for seven base moldings for CNC metalworking machines processed on five different machines with a given processing order among parts and machines and known processing time durations. Solving the resulting linear optimization task yields the optimal manufacturing schedule minimizing the overall processing time. The manufacturing schedule defines the moments of molding delivery, thus minimizing storage costs, and ensures that mounting due times are met. The proposed optimization approach is based on a real manufacturing plant problem. Different processing schedule variants for different technological restrictions were defined and implemented in practice at the Bulgarian company RAIS Ltd. The proposed approach could be generalized to other job shop scheduling problems in different applications.
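The core of such a formulation, start and end moments constrained by processing order, can be illustrated by computing earliest start times over the precedence graph. This is a minimal critical-path sketch with invented durations, not the paper's LP model; note that with a fixed processing order per machine, machine conflicts become additional precedence arcs, so the same recursion applies.

```python
def earliest_schedule(durations, preds):
    """Earliest start/end moments respecting precedence constraints.
    durations: job -> processing time; preds: job -> list of predecessors."""
    start, end = {}, {}

    def finish(job):
        if job not in end:
            # A job starts when its latest predecessor finishes (0 if none).
            start[job] = max((finish(p) for p in preds.get(job, [])), default=0)
            end[job] = start[job] + durations[job]
        return end[job]

    for job in durations:
        finish(job)
    return start, end, max(end.values())

# Toy instance: parts 'b' and 'c' must wait for 'a' to finish.
start, end, makespan = earliest_schedule(
    {"a": 3, "b": 2, "c": 4}, {"b": ["a"], "c": ["a"]})
print(makespan)  # 7: 'a' ends at 3, then 'c' runs until 7
```

The end moments of this schedule are exactly the delivery moments the abstract mentions; an LP formulation adds the same precedence inequalities as constraints and minimizes the makespan variable directly.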
Abstract: This paper proposes a modified Elastic Strip method for a mobile robot to avoid obstacles in real time in an uncertain environment. The method deals with the problem of driving the robot from an initial position to a target position based on an elastic force and a potential field force. To avoid obstacles, the robot modifies its trajectory based on signals received from the sensor system at each sampling time. It was evident that by combining the modified Elastic Strip method with a pseudomedian filter to process the nonlinear sensor data, uncertainties in the data received from the sensor system can be reduced. Simulations and experiments with these methods were carried out.
Abstract: Undular hydraulic jumps are characterized by a smooth rise of the free surface followed by a train of stationary waves. They are sometimes encountered in natural waterways and rivers. The characteristics of undular hydraulic jumps are studied here. The height, amplitude and other main characteristics of the undular jump depend on the upstream Froude number and the aspect ratio. The experiments were carried out in a smooth-bed flume. The results were compared with those of other researchers, and the main characteristics of the undular hydraulic jump are discussed in this article.