Abstract: KREISIG is a computer simulation program, first developed by Munawar (1994) in Germany to optimize signalized roundabouts. Traffic movement is modeled with car-following theory, and the turbine method is implemented for signal setting. The program was later extended in Indonesia to match Indonesian traffic characteristics by adjusting driver sensitivity, and a trial-and-error method was used to calibrate the saturation flow. The saturation flow output was also compared with the calculation method of the 1997 Indonesian Highway Capacity Manual. The program was then applied to optimize the signalized Kleringan roundabout in the Malioboro area of Yogyakarta, Indonesia. The results show that this method can optimize the signal setting of this roundabout; the program is therefore recommended for optimizing signalized roundabouts.
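The abstract does not reproduce KREISIG's car-following equations, but the stimulus-response idea behind car-following theory can be sketched as follows. The model form, sensitivity value, and time step are illustrative assumptions, not KREISIG's actual implementation:

```python
def simulate(sensitivity, steps=200, dt=0.5):
    """Toy stimulus-response car-following: the follower accelerates in
    proportion to the speed difference with the leader (illustrative only)."""
    leader_v, follower_v = 10.0, 6.0   # speeds, m/s
    leader_x, follower_x = 50.0, 0.0   # positions, m
    for _ in range(steps):
        accel = sensitivity * (leader_v - follower_v)
        follower_v = max(0.0, follower_v + accel * dt)
        leader_x += leader_v * dt
        follower_x += follower_v * dt
    return follower_v, leader_x - follower_x

speed, gap = simulate(sensitivity=0.4)
# The follower converges to the leader's speed without closing the gap.
```

The sensitivity parameter is the natural knob for the kind of driver-behaviour recalibration and trial-and-error saturation flow fitting described above.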
Abstract: E-Appointment Scheduling (EAS) has been developed
to handle appointments for UMP students, lecturers in the Faculty
of Computer Systems & Software Engineering (FCSSE), and the
Student Medical Center. The schedules are based on the timetable
and university activities. Constraint Logic Programming (CLP) has
been implemented to solve the scheduling problem by recommending
available slots to users from the lecturers' and doctors'
timetables. By using this system, time and cost are saved because
appointments are generated automatically. In addition, the system
offers lecturers and doctors an alternative way to decide whether
to approve or reject the appointments.
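The CLP formulation itself is not shown in the abstract; its core constraint, recommending only slots where every participant's timetable is free, can be sketched with a simple filter (the slot names and calendars below are hypothetical):

```python
def free_slots(slots, busy_calendars):
    """Return the slots satisfying the 'everyone is free' constraint,
    a toy stand-in for the CLP solver's domain pruning."""
    return [s for s in slots if all(s not in busy for busy in busy_calendars)]

slots = ["Mon 9:00", "Mon 10:00", "Tue 9:00", "Tue 10:00"]
lecturer_busy = {"Mon 9:00", "Tue 10:00"}
student_busy = {"Mon 10:00"}
print(free_slots(slots, [lecturer_busy, student_busy]))  # -> ['Tue 9:00']
```

The recommended slot can then be auto-assigned, with the final approve/reject decision left to the lecturer or doctor as the abstract describes.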
Abstract: The paper presents a compressor anti-surge control
system that maximizes compressor throughput by reducing the
pressure standard deviation, increasing the safety margin between
the design point and the surge limit line, and avoiding possible
machine surge. Alternative control strategies are presented.
Abstract: As the information industry and mobile communication
technology develop, this study examines a new concept of
intelligent structures and maintenance techniques that applies a
Ubiquitous Sensor Network (USN) to social infrastructure such as
civil and architectural structures. It builds on the concept of
Ubiquitous Computing, in which computers embedded invisibly in the
objects around us cooperate and connect with each other to support
human life.
The purpose of this study is therefore to investigate the
wireless communication capability of a sensor node embedded in a
reinforced concrete structure. A basic experiment on the electric
wave permeability of the sensor node was performed by fabricating
specimens with varying concrete thickness and steel bar
configurations commonly used in construction, in order to
determine the feasibility of applying USN to constructed
structures.
Taking the steel bar pitch, the thickness of the placed concrete,
and the RF signal intensity of the transmitter-receiver as
variables, and with the wireless communication module installed
inside, the possible communication distance through plain concrete
and through reinforced concrete at each bar pitch was measured in
the horizontal and vertical directions. In addition, for precise
measurement of the electric wave attenuation, the magnitude of the
electric wave over the used frequency range was measured with a
spectrum analyzer. The attenuation was analyzed numerically, and
the effect of wavelength was analyzed from the properties of the
frequency band.
As a result of studying the feasibility of applying wireless
sensors to constructed structures, plain concrete shows a
permeable depth of 45 cm, while reinforced concrete shows 37 cm at
a bar pitch of 5 cm and 45 cm at a pitch of 15 cm.
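The spectrum analyzer measurements quantify attenuation in decibels, and the wavelength effect follows directly from the carrier frequency. Both conversions are standard; the numeric values below are illustrative, not the study's measurements, and the 2.4 GHz band is an assumed example since the abstract does not state the frequency used:

```python
import math

def attenuation_db(p_ref_mw, p_recv_mw):
    """Attenuation between a free-air reference power and the power
    received through the concrete specimen, in dB."""
    return 10.0 * math.log10(p_ref_mw / p_recv_mw)

def wavelength_m(freq_hz):
    """Wavelength from frequency; shorter wavelengths interact more
    strongly with closely pitched steel bars."""
    return 299_792_458.0 / freq_hz

print(attenuation_db(1.0, 0.001))     # 30 dB loss through the specimen
print(round(wavelength_m(2.4e9), 3))  # ~0.125 m at an assumed 2.4 GHz
```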
Abstract: In this study, a new and fast algorithm for Ascending
Aorta (AscA) and Descending Aorta (DesA) segmentation is
presented using Computed Tomography Angiography images. This
process is particularly important for the detection of aortic
plaques, aneurysms, calcification, or stenosis. The method is
carried out in four steps. In the first step, lung segmentation is
performed. In the second, the Mediastinum Region (MR) is detected
for use in the segmentation. In the third, an optimal threshold is
applied to the images and components outside the MR are removed.
Lastly, AscA and DesA are identified and segmented. The
performance of the method was judged good by radiologists, and it
provides medically sufficient results for surgery.
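The abstract does not say which optimal thresholding method is applied in the third step; Otsu's method is a common choice for separating CT intensity modes and serves here as an assumed stand-in, sketched in plain Python:

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's optimal threshold: pick t maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b = w_b = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]              # background class weight
        if w_b == 0:
            continue
        w_f = total - w_b           # foreground class weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b, m_f = sum_b / w_b, (sum_all - sum_b) / w_f
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# A bimodal intensity distribution separates cleanly between its modes.
t = otsu_threshold([10] * 50 + [200] * 50)
```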
Abstract: This paper proposes the numerical simulation of the
investment casting of gold jewelry. It aims to study the behavior of
fluid flow during mould filling and solidification and to optimize the
process parameters, which helps predict and control casting
defects such as gas porosity and shrinkage porosity. FLOW-3D, a
finite difference computer simulation software, was used to
simulate the jewelry casting process. A simplified model was
designed for both numerical simulation and real casting
production. A set of sensors was placed at different positions on
the wax tree of the model to detect filling times, while a set of
thermocouples was placed to record the temperature during casting
and cooling. The detected data were used to validate the results
of the numerical simulation against the results of the real
casting.
The resulting comparisons signify that the numerical simulation can
be used as an effective tool in investment-casting-process
optimization and casting-defect prediction.
Abstract: The paper proposes a way of parallel processing of
SURF and Optical Flow for moving object recognition and tracking.
Object recognition and tracking is one of the most important tasks
in computer vision; however, its many operations slow processing
down, so that real-time object recognition and tracking is not
possible. The proposed method uses SURF, a typical feature
extraction technique, together with Optical Flow for moving
objects to overcome this disadvantage and achieve real-time moving
object recognition and tracking, and applies parallel processing
techniques for speed improvement. First, an image from the
database and one acquired through the camera are analyzed with
SURF and compared to recognize the same object; a ROI (Region of
Interest) is then set for tracking the movement of feature points
using Optical Flow. Second, multi-threading is used to improve
processing speed and recognition through parallel processing.
Finally, the performance and efficiency of the algorithm are
evaluated and verified through experiments.
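The SURF and Optical Flow stages require a vision library such as OpenCV; the parallel structure of the pipeline, however, can be sketched independently with placeholder stages and a thread pool. All function bodies here are stand-ins, not the actual SURF or Optical Flow computations:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_features(frame):
    """Placeholder for SURF keypoint extraction on one frame."""
    return [v for v in frame if v > 128]

def track(features):
    """Placeholder for Optical Flow tracking inside the ROI."""
    return len(features)

frames = [list(range(j, j + 100)) for j in range(100, 300, 100)]
# Multi-threaded stage: feature extraction runs in parallel per frame.
with ThreadPoolExecutor(max_workers=4) as pool:
    feature_sets = list(pool.map(extract_features, frames))
tracked = [track(f) for f in feature_sets]
```

In a real implementation the heavy per-frame stages (SURF matching, pyramidal Optical Flow) are the candidates for the thread pool, while the cheap ROI bookkeeping stays sequential.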
Abstract: Over 90% of the world trade is carried by the
international shipping industry. As most of the countries are
developing, seaborne trade continues to expand to bring benefits for
consumers across the world. Studies show that world trade will
increase by 70-80% through shipping in the next 15-20 years. The
present global fleet of 70,000 commercial ships consumes
approximately 200 million tonnes of diesel fuel a year, and this
is expected to reach around 350 million tonnes a year by 2020.
This will increase the
demand for fuel and also increase the concentration of CO2 in the
atmosphere. It is therefore essential to control this massive
fuel consumption and CO2 emission. The idea is to utilize a
diesel-wind hybrid system for ship propulsion. Using wind energy
by installing modern wing-sails on ships can drastically reduce
the consumption of diesel fuel, and a huge amount of wind energy
is available in the oceans. Whenever wind is available, the
wing-sails are deployed and the diesel engine is throttled down
while the same forward speed is maintained. Wind direction along a
particular shipping route is not the same throughout; it changes
with the global wind pattern, which depends on the latitude. The
wing-sail orientation should therefore be such that it optimizes
the use of wind energy. We have written a computer programme
which, given data on wind velocity, wind direction, and
ship-motion direction, finds the best wing-sail position and the
fuel saving for
commercial ships. We have calculated net fuel saving in certain
international shipping routes, for instance, from Mumbai in India to
Durban in South Africa. Our estimates show that about 8.3% of the
diesel fuel can be saved by utilizing the wind. We are also
developing an experimental model of the ship employing airfoils
(small-scale wing-sails) and will test it in the National Wind
Tunnel Facility at IIT Kanpur in order to develop a control
mechanism for a system of airfoils.
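The programme's wing-sail optimization can be illustrated with a heavily simplified flat-plate sail model: force proportional to the square of the wind component normal to the sail, directed along the sail normal, with the best sail angle found by brute force. Every constant and the model itself are illustrative assumptions; the actual programme presumably uses proper aerodynamic data:

```python
import math

def forward_thrust(wind_speed, wind_dir_deg, sail_deg):
    """Forward (along-heading) thrust of a flat-plate sail. The ship heads
    along +x; wind_dir_deg is the direction the wind blows TOWARD. k = 1."""
    wx = wind_speed * math.cos(math.radians(wind_dir_deg))
    wy = wind_speed * math.sin(math.radians(wind_dir_deg))
    sx, sy = math.cos(math.radians(sail_deg)), math.sin(math.radians(sail_deg))
    nx, ny = -sy, sx                      # sail normal
    flow = wx * nx + wy * ny              # wind component normal to the sail
    if flow < 0:                          # flip normal to the leeward side
        nx, ny, flow = -nx, -ny, -flow
    sin_attack = flow / wind_speed if wind_speed else 0.0
    force = wind_speed ** 2 * sin_attack ** 2
    return force * nx                     # component pushing the ship forward

def best_sail_angle(wind_speed, wind_dir_deg):
    """Brute-force the sail angle (degrees) maximizing forward thrust."""
    return max(range(180), key=lambda a: forward_thrust(wind_speed, wind_dir_deg, a))

print(best_sail_angle(10.0, 0.0))   # tailwind: sail square to the hull -> 90
```

Feeding route legs with their local wind data through such a search, and summing the diesel displaced on each leg, is the book-keeping behind a route-level saving estimate like the 8.3% figure above.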
Abstract: The Integrated Performance Modelling Environment
(IPME) is a powerful simulation engine for task simulation and
performance analysis. However, it has no high-level cognition,
such as memory and reasoning, for complex simulation. This article
introduces a knowledge representation and reasoning scheme that can
accommodate uncertainty in simulations of military personnel with
IPME. This approach demonstrates how advanced reasoning models
that support similarity-based associative process, rule-based abstract
process, multiple reasoning methods and real-time interaction can be
integrated with conventional task network modelling to provide
greater functionality and flexibility when modelling operator
performance.
Abstract: Some meta-schedulers query the information systems of individual supercomputers in order to submit jobs to the least busy supercomputer on a computational Grid. However, this information can become outdated by the time a job starts, due to changes in scheduling priorities. The MSR scheme is based on Multiple Simultaneous Requests and can take advantage of opportunities resulting from these priority changes. This paper presents the SWARM meta-scheduler, which can speed up the execution of large sets of tasks by minimizing job queuing time through the submission of multiple requests. Performance tests have shown that this new meta-scheduler is faster than an implementation of the MSR scheme and the gLite meta-scheduler. SWARM has been used through the GridQTL project beta-testing portal during the past year. Usage statistics are provided and demonstrate its capacity to reliably achieve a substantial reduction of execution time in production conditions.
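The Multiple Simultaneous Requests idea can be illustrated with a toy comparison: a conventional meta-scheduler commits to one queue using possibly stale load information, while MSR/SWARM-style submission clones the request to every queue and runs wherever a clone starts first. The delay values and the "stale choice" below are invented for illustration:

```python
def msr_start_delay(queue_delays):
    """Clone the request to every queue; the job starts at the earliest
    delay, and the remaining clones are cancelled."""
    return min(queue_delays)

def single_request_delay(queue_delays, chosen):
    """Conventional submission: commit to the queue chosen from (possibly
    outdated) information-system data."""
    return queue_delays[chosen]

# Queue 0 looked least busy at query time, but priorities changed since.
delays = [30, 5, 12]
print(single_request_delay(delays, chosen=0))  # -> 30
print(msr_start_delay(delays))                 # -> 5
```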
Abstract: Computers are being integrated into various aspects
of everyday human life in different shapes and abilities. This
fact has intensified the requirement for software development
technologies that are: 1) portable, 2) adaptable, and 3) simple to
develop. This problem, also known as the Pervasive Computing
Problem (PCP), can be addressed in different ways, each with its
own pros and cons; Context-Oriented Programming (COP) is one of
them.
In this paper a design for a COP framework, a context-aware
framework, is presented which eliminates the weak points of a
previous design based on interpreted languages while bringing the
power of compiled languages to the implementation of such
frameworks.
The key point of this improvement is combining COP with
Dependency Injection (DI) techniques. Both the old and new
frameworks are analyzed to show their advantages and
disadvantages. Finally, a simulation of both designs is presented,
indicating that the practical results agree with the theoretical
analysis, with the new design running almost 8 times faster.
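The combination of COP and DI can be sketched as a registry whose bindings are swapped when the context changes, so context-dependent behaviour is resolved at injection time rather than scattered through the code. The class names below are hypothetical, not taken from the paper's framework:

```python
class Injector:
    """Minimal dependency injector: context layers rebind providers."""
    def __init__(self):
        self._bindings = {}
    def bind(self, key, provider):
        self._bindings[key] = provider
    def get(self, key):
        return self._bindings[key]()   # construct the bound provider on demand

class GPSLocator:
    def locate(self):
        return "gps-fix"

class WifiLocator:
    def locate(self):
        return "wifi-fix"

injector = Injector()
injector.bind("locator", GPSLocator)        # context: outdoors
print(injector.get("locator").locate())     # -> gps-fix
injector.bind("locator", WifiLocator)       # context switch: indoors
print(injector.get("locator").locate())     # -> wifi-fix
```

In a compiled language the same pattern resolves through interfaces and a DI container, which is where the claimed performance advantage over the interpreter-based design would come from.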
Abstract: In this paper, we propose a single-sample-path-based
algorithm with state aggregation to optimize the average rewards
of singularly perturbed Markov reward processes (SPMRPs) with
large state spaces. Such a reward process is assumed to depend on
a set of parameters. Unlike other kinds of Markov chains, SPMRPs
have their own hierarchical structure, and based on this special
structure our algorithm can alleviate the optimization load.
Moreover, our method can be applied online because it evolves with
the simulated sample path. Compared with the original algorithm
for general MRPs, a new gradient formula for the average reward
performance metric of SPMRPs is introduced and proved in the
Appendix. Based on these gradients, the schedule of the iteration
algorithm, which relies on a single sample path, is presented. A
special case in which the parameters only dominate the
disturbance matrices is then analyzed, and a precise comparison is
made between our algorithm and the older ones aimed at general
Markov reward processes; when applied to SPMRPs, our method
converges faster in these cases. Furthermore, to illustrate the
practical value of SPMRPs, a simple example of multiprogramming in
computer systems is presented and simulated, and the physical
meaning of SPMRPs in networks of queues is clarified for the
corresponding practical model.
Abstract: Intravitreal injection (IVI) is the most common treatment for posterior segment eye diseases such as endophthalmitis, retinitis, age-related macular degeneration, diabetic retinopathy, uveitis, and retinal detachment. Most of the drugs used to treat vitreoretinal diseases have a narrow concentration range in which they are effective and may be toxic at higher concentrations. Therefore, it is critical to know the drug distribution within the eye following intravitreal injection. Knowing the drug distribution, ophthalmologists can decide on the injection frequency while minimizing damage to tissues. The goal of this study was to develop a computer model to predict the intraocular concentrations and pharmacokinetics of intravitreally injected drugs. A finite volume model was created to predict the distribution of two drugs with different physicochemical properties in the rabbit eye. The model parameters were obtained from a literature review. To validate this numerical model, in vivo data of the spatial concentration profile from the lens to the retina were compared with the numerical data. The difference between the numerical and experimental data was less than 5%. This validation provides strong support for the numerical methodology and the associated assumptions of the current study.
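The study's finite volume model is three-dimensional with real ocular geometry and drug parameters; the essential update, conservative diffusion of an injected bolus between control volumes, can be sketched in one dimension. All numbers below are illustrative, not the study's parameters:

```python
def diffuse_1d(conc, d_coeff, dx, dt, steps):
    """Explicit finite-volume diffusion with zero-flux boundaries.
    Stability requires d_coeff * dt / dx**2 <= 0.5."""
    c = list(conc)
    r = d_coeff * dt / dx ** 2
    for _ in range(steps):
        new = c[:]
        for i in range(len(c)):
            left = c[i - 1] if i > 0 else c[i]            # zero-flux wall
            right = c[i + 1] if i < len(c) - 1 else c[i]  # zero-flux wall
            new[i] = c[i] + r * (left - 2 * c[i] + right)
        c = new
    return c

# An injected bolus in the middle cell spreads out over time;
# the total drug mass across the cells is conserved.
profile = diffuse_1d([0.0, 0.0, 1.0, 0.0, 0.0],
                     d_coeff=1.0, dx=1.0, dt=0.25, steps=50)
```

Mass conservation is the property that makes the finite volume formulation attractive for pharmacokinetic predictions: what leaves one control volume enters its neighbour exactly.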
Abstract: Deformable active contours are widely used in
computer vision and image processing applications for image
segmentation, especially in biomedical image analysis. The active
contour or “snake" deforms towards a target object by controlling the
internal, image and constraint forces. However, if the contour
initialized with a lesser number of control points, there is a high
probability of surpassing the sharp corners of the object during
deformation of the contour. In this paper, a new technique is
proposed to construct the initial contour by incorporating prior
knowledge of significant corners of the object detected using the
Harris operator. This new reconstructed contour begins to deform, by
attracting the snake towards the targeted object, without missing the
corners. Experimental results with several synthetic images show the
ability of the new technique to deal with sharp corners with a high
accuracy than traditional methods.
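The initial-contour construction can be sketched as index bookkeeping: sample the object boundary evenly, then force every Harris-detected corner index into the control point set so no sharp corner can be skipped. The sampling scheme below is a simplified assumption, not the paper's exact procedure:

```python
def initial_control_points(n_boundary, corner_indices, n_samples):
    """Indices of snake control points along a closed boundary of
    n_boundary pixels: even samples plus all detected corners."""
    step = n_boundary / n_samples
    even = {int(i * step) for i in range(n_samples)}
    return sorted(even | set(corner_indices))

# The corner at index 8 is kept even though even sampling would miss it.
points = initial_control_points(n_boundary=100, corner_indices=[8, 50],
                                n_samples=10)
print(points)  # -> [0, 8, 10, 20, 30, 40, 50, 60, 70, 80, 90]
```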
Abstract: A finite element analysis (FEA) computer software, HyperWorks, is utilized to re-design an automotive component to reduce its mass. Reducing component mass contributes towards environmental sustainability by saving the world's valuable metal resources and by reducing carbon emissions through improved overall vehicle fuel efficiency. A shape optimization analysis was performed on a rear spindle component. Pre-processing and solving were performed using HyperMesh and RADIOSS respectively, and shape variables were defined using HyperMorph. The optimization solver OptiStruct was then utilized with fatigue life set as a design constraint. Since Stress-Number of Cycles (S-N) theory deals with uni-axial stress, the signed von Mises stress on the component was used to look up damage on the S-N curve, with the Gerber criterion for mean stress corrections. The optimization analysis resulted in a 24% reduction of the original mass. The study proved that the adopted approach has high potential for environmental sustainability.
Abstract: This paper explores how Critical Systems Thinking and Action Research can be used to improve student performance in Networking. When describing a system from a systems thinking perspective, the following aspects can be identified: the total system performance, the system's environment, the resources, the components, and the management of the system. Following the history of systems thinking, we observe three emergent methodologies, namely hard systems, soft systems, and critical systems. This paper uses Critical Systems Thinking (CST), which describes systems in terms of contradictions and conflict, and demonstrates how CST can be used in an Action Research (AR) project to improve the performance of students. An intervention in the form of student assessment and its impact are discussed.
Abstract: Knowing the geometrical pose of objects in a manufacturing line before robot manipulation is required, and is less time consuming than overall shape measurement. To perform it, information about the shape representation and matching of objects is required. Objects are compared through their descriptors, which are conceptually subtracted from each other to form a scalar metric; the smaller the metric value, the closer the objects are considered to be. Rotating an object away from its static pose in some direction changes the scalar metric value of the boundary information after feature extraction. In this paper, an indexing technique for the retrieval of 3D geometrical models based on the similarity between boundary shapes is proposed, in order to measure 3D CAD object pose using object shape feature matching for a Computer Aided Testing (CAT) system in a production line. Experimental results show the effectiveness of the proposed method.
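The "conceptual subtraction" of descriptors into a scalar metric can be sketched as a Euclidean distance between boundary feature vectors, with retrieval picking the indexed pose of minimum distance. The descriptors and pose labels here are invented for illustration:

```python
def descriptor_distance(desc_a, desc_b):
    """Scalar metric from 'subtracting' two equal-length descriptors."""
    return sum((a - b) ** 2 for a, b in zip(desc_a, desc_b)) ** 0.5

def best_pose(query_desc, indexed):
    """Retrieve the stored pose whose descriptor is closest to the query."""
    return min(indexed,
               key=lambda item: descriptor_distance(query_desc, item[1]))[0]

poses = [("0deg",  [1.0, 0.0, 0.2]),
         ("45deg", [0.7, 0.7, 0.2]),
         ("90deg", [0.0, 1.0, 0.2])]
print(best_pose([0.68, 0.72, 0.19], poses))  # -> 45deg
```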
Abstract: The fair share objective has recently been included in
the goal-oriented parallel computer job scheduling policy.
However, the previous work only presented the overall scheduling
performance, so the per-user performance of the policy is still
lacking. In this work, the details of the per-user fair share
performance under the Tradeoff-fs(Tx:avgX) policy are further
evaluated. A basic fair share priority backfill policy, namely
RelShare(1d), is also studied. The performance of all policies is
collected using an event-driven simulator with three real job
traces as input. The experimental results show that high-demand
users usually benefit under most policies because their jobs are
either large or numerous. In the large-job case, one executed job
may result in an over-share during that period; in the other case,
the jobs may be backfilled for performance. However, users with a
mixture of jobs may suffer because, while their smaller jobs are
executing, the priority of their remaining jobs is lowered.
Further analysis does not show any significant impact from users
with many jobs or users with a large runtime approximation error.
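The mechanism by which mixed-workload users suffer can be illustrated with a toy fair share priority rule: priority falls as a user's accumulated usage share rises, so jobs still queued behind that user's own running jobs inherit the penalty. The decay rule is illustrative, not the Tradeoff-fs or RelShare formula:

```python
def fair_share_priority(user_usage, total_usage):
    """Toy fair share rule: priority = 1 - user's share of total usage."""
    return 1.0 - (user_usage / total_usage if total_usage else 0.0)

# While a user's small jobs execute, their accumulated usage grows and
# the priority of their remaining queued jobs drops.
before = fair_share_priority(user_usage=10.0, total_usage=100.0)  # 0.9
after = fair_share_priority(user_usage=40.0, total_usage=130.0)   # ~0.69
```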
Abstract: In this study, a clustering technique has been implemented which is K-Means-like with a hierarchical initial set (HKM). The goal of this study is to show that clustering document sets enhances precision in information retrieval systems, as was proved by Bellot & El-Beze for the French language. A comparison is made between the traditional information retrieval system and the clustered one, and the effect of increasing the number of clusters on precision is also studied. The indexing technique is Term Frequency * Inverse Document Frequency (TF * IDF). It has been found that Hierarchical K-Means-like clustering (HKM) with 3 clusters over 242 Arabic abstract documents from the Saudi Arabian National Computer Conference gives significant results compared with a traditional information retrieval system without clustering. Additionally, it has been found that it is not necessary to increase the number of clusters to further improve precision.
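The TF * IDF indexing follows the standard scheme; a minimal version (raw term frequency, idf = log(N/df)) looks like this. Variants such as normalized tf or smoothed idf exist, and the study's exact weighting is not specified, so this is an assumed baseline:

```python
import math

def tf_idf(docs):
    """TF*IDF weight per term per document: tf(t,d) * log(N / df(t))."""
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        tf = {}
        for term in doc:
            tf[term] = tf.get(term, 0) + 1
        weights.append({t: c * math.log(n / df[t]) for t, c in tf.items()})
    return weights

docs = [["retrieval", "cluster"],
        ["retrieval", "arabic"],
        ["cluster", "arabic"],
        ["retrieval", "retrieval"]]
w = tf_idf(docs)
# 'retrieval' appears twice in doc 3 and in 3 of 4 docs: weight 2*log(4/3).
```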