Abstract: Pharmacology curriculum plays an integral role in
medical education. Learning pharmacology to choose and prescribe
drugs is a major challenge encountered by students. We developed
pharmacology applied learning activities for first year medical
students that included realistic clinical situations with escalating
complications which required the students to analyze the situation
and think critically to choose a safe drug. Tutor feedback was
provided at the end of each session. An evaluation was conducted to assess the students' level of interest and the usefulness of the sessions in the rational selection of drugs. A majority (98%) of the students agreed that the session was an extremely useful learning exercise and that similar sessions would help in the rational selection of drugs. Applied learning sessions in the early years of a medical program may promote deep learning and bridge the gap between pharmacology theory and clinical practice. They may also enhance safe prescribing skills.
Abstract: The paper presents coupled electromagnetic and
thermal field analysis of a busbar system (of rectangular cross-section geometry) subjected to short-circuit conditions. The laboratory model was validated against both an analytical solution and experimental
observations. The considered problem required the computation of
the detailed distribution of the power losses and the heat transfer
modes. In this electromagnetic and thermal analysis, different
definitions of electric busbar heating were considered and compared.
The busbar system is three-phase and consists of aluminum, painted-aluminum, and copper busbars. The solution to the coupled
field problem is obtained using the finite element method and the
QuickField™ program. Experiments were carried out using two different approaches, and the measurements were compared with the computed results.
Abstract: The incidence of oral cancer in Taiwan has increased year by year; since 1994 it has replaced nasopharyngeal cancer as the most common head and neck cancer. Early examination and identification, leading to earlier treatment, is the most effective medical response to these cancers. Although the government fully subsidizes the expenses of oral cancer screening and promotes it extensively, citizen participation has remained low. The purpose of this study is to understand the factors affecting citizens' behavioral intentions to take an oral cancer screening. Based on the Theory of Planned Behavior, this study adopted four distinct variables to explain these behavioral intentions. Seven hundred questionnaires were distributed, and 500 valid responses (a 71.4% return rate) were received from citizens aged 30 or above in the eastern counties of Taiwan. Test results showed that attitude toward, subjective norms regarding, and perceived behavioral control over oral cancer screening varied across demographic factors. The study showed that attitude toward, subjective norms regarding, and perceived behavioral control over oral cancer screening had positive impacts on the corresponding behavioral intention. The tests concluded that the Theory of Planned Behavior is an appropriate theoretical framework for explaining the factors influencing intentions to take oral cancer screening. This study suggests that healthcare professionals should provide highly accessible screening services rather than merely delivering knowledge on oral cancer in order to promote citizens' intentions to be screened. The research also provides practical implications for healthcare professionals when formulating and implementing promotional instruments to lift the screening rate of oral cancer.
Abstract: Computing the facility location problem for every location in a country simultaneously is not easy. This paper describes solving the problem with cluster computing, using a parallel algorithm based on local search with a single-swap method. The parallel implementation uses portable parallel programming, the Message Passing Interface (MPI), on a Microsoft Windows Compute Cluster. The paper presents the local-search algorithm with the single-swap method and the MPI implementation that decides which facilities to open. When large datasets are considered, the process of calculating a reasonable cost for a facility becomes time consuming. The results show that the parallel computation of the facility location problem on a cluster achieves speedup and scales well as the problem size increases.
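The local search with single swap can be sketched serially as follows. This is an illustrative Python sketch, not the paper's MPI implementation; the p-facility formulation and the cost structure (opening costs plus nearest-facility assignment costs) are assumptions for illustration:

```python
import random

def total_cost(open_facs, open_cost, assign_cost):
    """Cost of opening `open_facs` plus assigning each client to its cheapest open facility."""
    cost = sum(open_cost[f] for f in open_facs)
    for client_costs in assign_cost:  # one row of facility costs per client
        cost += min(client_costs[f] for f in open_facs)
    return cost

def single_swap_search(num_facs, p, open_cost, assign_cost, seed=0):
    """Local search: start from a random set of p open facilities and repeatedly
    apply an improving single swap (close one open facility, open one closed)
    until no swap lowers the total cost."""
    rng = random.Random(seed)
    current = set(rng.sample(range(num_facs), p))
    best = total_cost(current, open_cost, assign_cost)
    improved = True
    while improved:
        improved = False
        for out in sorted(current):
            for inn in sorted(set(range(num_facs)) - current):
                cand = (current - {out}) | {inn}
                c = total_cost(cand, open_cost, assign_cost)
                if c < best:
                    current, best, improved = cand, c, True
                    break  # first improvement: restart the scan
            if improved:
                break
    return current, best
```

In the parallel version described in the abstract, the candidate swaps would be evaluated across MPI processes rather than in the nested loops shown here.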
Abstract: Cognitive science appeared about 40 years ago, following the challenge of artificial intelligence, as common territory for several scientific disciplines: computer science, mathematics, psychology, neurology, philosophy, sociology, and linguistics. The newborn science was justified, on the one hand, by the complexity of the problems related to human knowledge and, on the other, by the fact that none of the above-mentioned sciences alone could explain mental phenomena. Based on the data supplied by experimental sciences such as psychology and neurology, models of the operation of the human mind are built in cognitive science. These models are implemented in computer programs and/or electronic circuits (specific to artificial intelligence) – cognitive systems – whose competences and performances are compared with human ones, leading to the reinterpretation of psychological and neurological data and to the construction of new models. In these processes, psychology provides the experimental basis, while philosophy and mathematics provide the level of abstraction that is utterly necessary for the interplay of the sciences mentioned.
The general problematic of the cognitive approach comprises two important types of approach: the computational one, starting from the idea that mental phenomena can be reduced to calculus operations on 1s and 0s, and the connectionist one, which considers the products of thinking to be the result of the interaction among all the component systems. In psychology, measurements in the computational register use classical questionnaires and psychometric tests, generally based on calculus methods. Considering both sides of cognitive science, we can notice a gap in the possibilities for measuring psychological products from the connectionist perspective, which requires a unitary understanding of the quality-quantity whole. In such an approach, measurement by calculus proves inefficient. Our research, carried out over more than 20 years, leads to the conclusion that measurement by forms properly fits the laws and principles of connectionism.
Abstract: The ever-growing use of the aspect-oriented development methodology in software engineering requires tool support for both research environments and industry. So far, tool support for many activities in aspect-oriented software development has been proposed to automate and facilitate development. For instance, AJaTS provides a transformation system to support aspect-oriented development and refactoring. In particular, it is well established that the abstract interpretation of programs, in any paradigm, as pursued in static analysis is best served by a high-level program representation such as the Control Flow Graph (CFG). With such a representation, an analysis can more easily locate common programming idioms for which helpful transformations are already known, and the association between the input program and the intermediate representation can be maintained more closely. However, although current research defines, to some extent, sound concepts and foundations for control-flow analysis of aspect-oriented programs, it does not provide a concrete tool dedicated to constructing the CFG of these programs. Furthermore, most of these works focus on other issues in Aspect-Oriented Software Development (AOSD), such as testing or data-flow analysis, rather than on the CFG itself. Therefore, this study is dedicated to building an aspect-oriented control-flow-graph construction tool called AJcFgraph Builder. The tool can be applied in many software engineering tasks in the context of AOSD, such as software testing, software metrics, and so forth.
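As background to what such a tool must compute, here is a minimal sketch of CFG construction for plain (non-aspect) code, using Python's standard ast module. AJcFgraph Builder itself targets aspect-oriented programs and would additionally have to model advice and join points:

```python
import ast

def build_cfg(source):
    """Build a simple CFG: each statement (identified by its line number) is a
    node, and edges follow control flow. Handles sequences and if/else only;
    a real tool would also cover loops, calls and, for AOP, advice weaving."""
    tree = ast.parse(source)
    edges = []

    def walk(stmts, successor):
        # Wire `stmts` in sequence; the last one falls through to `successor`.
        for i, stmt in enumerate(stmts):
            nxt = stmts[i + 1].lineno if i + 1 < len(stmts) else successor
            if isinstance(stmt, ast.If):
                edges.append((stmt.lineno, stmt.body[0].lineno))  # true branch
                walk(stmt.body, nxt)
                if stmt.orelse:
                    edges.append((stmt.lineno, stmt.orelse[0].lineno))  # false branch
                    walk(stmt.orelse, nxt)
                else:
                    edges.append((stmt.lineno, nxt))  # fall-through when no else
            else:
                edges.append((stmt.lineno, nxt))
        return edges

    return walk(tree.body, "EXIT")
```

For `x = 1` / `if x:` with two arms followed by `z = y`, this yields branch edges from the `if` into each arm and join edges from both arms into the statement after the conditional.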
Abstract: 2007 is a jubilee year: in 1967, the programming language SIMULA 67 was presented, containing all aspects of what was later called object-oriented programming. The present paper describes the development that led to object-oriented programming, the role of simulation in this development, and other tools that appeared in SIMULA 67 and are nowadays called super-object-oriented programming.
Abstract: Evolutionary Programming (EP) represents a
methodology of Evolutionary Algorithms (EA) in which mutation is considered the main reproduction operator. This paper presents a novel EP approach to Artificial Neural Network (ANN) learning. The proposed strategy consists of two components: a self-adaptive component, which contains phenotype information, and a dynamic component, which is described by the genotype. Self-adaptation is achieved by the addition of a value, called the network weight, which depends on the total number of hidden layers and the average number of neurons in the hidden layers.
The dynamic component changes its value depending on the fitness
of a chromosome, exposed to mutation. Thus, the mutation step size
is controlled by two components, encapsulated in the algorithm,
which adjust it according to the characteristics of a predefined ANN
architecture and the fitness of a particular chromosome. The
comparative analysis of the proposed approach and the classical EP (Gaussian mutation) showed that a significant acceleration of the evolution process is achieved by using both phenotype and genotype information in the mutation strategy.
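The two-component mutation can be caricatured as follows. The abstract does not give the exact formulas, so both the network-weight expression and the direction of the fitness dependence (here: larger error, larger mutation steps) are assumptions for illustration only:

```python
import random

def network_weight(num_hidden_layers, avg_neurons):
    """Self-adaptive (phenotype) component: a scalar derived from the ANN
    architecture. The concrete formula here is an assumption."""
    return 1.0 / (1.0 + num_hidden_layers * avg_neurons) ** 0.5

def mutate(weights, fitness, num_hidden_layers, avg_neurons, rng=None):
    """Gaussian mutation whose step size combines the phenotype component with
    a dynamic, fitness-dependent component. `fitness` is assumed to be a
    non-negative error value; worse chromosomes mutate more aggressively."""
    rng = rng or random.Random(0)
    pheno = network_weight(num_hidden_layers, avg_neurons)
    dynamic = max(fitness / (1.0 + fitness), 0.01)  # floored so steps never vanish
    step = pheno * dynamic
    return [w + rng.gauss(0.0, step) for w in weights]
```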
Abstract: Human-related information security breaches within organizations are primarily caused by employees who have not been made aware of the importance of protecting the information they work with. Information security awareness is accordingly attracting more attention from industry, because stakeholders are held accountable for the information with which they work. The authors developed an Information Security Retrieval and Awareness model – entitled "ISRA" – tailored specifically towards enhancing information security awareness in industry amongst all users of information, to address shortcomings in existing information security awareness models. This paper is principally aimed at expounding a prototype for the ISRA model to highlight the advantages of utilizing the model. The prototype will focus on the non-technical, human-related information security issues in industry. The prototype will ensure that all stakeholders in an organization are part of an information security awareness process, and that these stakeholders are able to retrieve specific information related to information security issues relevant to their job category, preventing them from being overburdened with redundant information.
Abstract: This work concerns the topological optimization
problem for determining the optimal petroleum refinery
configuration. We are interested in further investigating and, hopefully, advancing existing optimization approaches and strategies that apply logic propositions to conceptual process synthesis problems. In particular, we seek to contribute to this
increasingly exciting area of chemical process modeling by
addressing the following potentially important issues: (a) how the
formulation of design specifications in a mixed-logical-and-integer
optimization model can be employed in a synthesis problem to enrich
the problem representation by incorporating past design experience,
engineering knowledge, and heuristics; and (b) how structural
specifications on the interconnectivity relationships by space (states)
and by function (tasks) in a superstructure should be properly
formulated within a mixed-integer linear programming (MILP)
model. The proposed modeling technique is illustrated on a case
study involving the alternative processing routes of naphtha, in which
significant improvement in the solution quality is obtained.
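As a small illustration of issue (a), a design rule such as "if unit A is selected, unit B must also be selected" (Y_A ⇒ Y_B) becomes the linear constraint y_A ≤ y_B over 0-1 variables. The snippet below checks this equivalence by enumeration; it is a generic encoding, not the paper's refinery model:

```python
from itertools import product

def implies(a, b):
    """Truth value of the proposition a => b."""
    return (not a) or b

# A logic proposition over selections becomes a linear inequality over 0/1
# variables: Y_A => Y_B maps to y_a <= y_b. Enumeration confirms they admit
# exactly the same 0/1 assignments.
feasible_linear = [(a, b) for a, b in product([0, 1], repeat=2) if a <= b]
feasible_logic = [(a, b) for a, b in product([0, 1], repeat=2)
                  if implies(bool(a), bool(b))]
assert feasible_linear == feasible_logic

# Similarly, "exactly one of routes R1..Rn" maps to y_1 + ... + y_n == 1.
```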
Abstract: Air quality studies were carried out in the towns of Putrajaya, Petaling Jaya, and Nilai in Peninsular Malaysia. In this
study, the variations of Ozone (O3) concentrations over a four year
period (2008-2011) were investigated using data obtained from the
Malaysian Department of the Environment (DOE). This study aims to
identify and describe the daily and monthly variations of O3
concentrations at the monitoring sites mentioned. The SPSS program (Statistical Package for the Social Sciences) was used to analyze these data in order to obtain the variations of O3 and to clarify the relationships between the stations. The findings of the study revealed that the highest concentrations of O3 occurred during midday and the afternoon (between 13:00 and 15:00 hrs). The comparison between
stations also showed that highest O3 concentrations were recorded in
Putrajaya. The comparisons of average and maximum concentrations
of O3 for the three stations showed that the strongest significant correlation was recorded at the Petaling Jaya station, with R2 = 0.667. Results from this study indicate that in the urban areas of
Peninsular Malaysia, the concentration of O3 depends on the
concentration of NOx. Furthermore, HYSPLIT back trajectories
(-72h) indicated that air-mass transport patterns can also influence the
O3 concentration in the areas studied.
Abstract: Design and land use are closely linked to the energy efficiency levels of an urban area. Current city planning practice does not involve an effective land use-energy evaluation in its 'blueprint' urban plans. The study proposes an appraisal method, which can be embedded in GIS programs, that uses five planning criteria to express how far a planner may deviate from the planning principles (criteria) in exchange for the most energy output s/he can obtain. The case of Balcova, a district in the Izmir metropolitan area, is used to evaluate the proposed master plan and the geothermal energy (heating only) use for the district concerned.
If the land use design were revised for maximum energy efficiency (a 30% gain was obtained), mainly by increasing the density around the geothermal wells and proposing more mixed-use zones, we would have a 17% distortion (infidelity to the main planning principles) from the original plan. The proposed method can be an effective simulation tool for planners, whose calculations can be made with GIS-ready tools, to evaluate efficiency levels for different plan proposals, showing how much energy saving causes how much deviation from the other planning ideals. Lower energy use may be possible for different land use proposals under various policy trials.
Abstract: The increasing competitiveness in the manufacturing industry is forcing manufacturers to seek effective processing schedules. The paper presents an optimization approach to manufacturing scheduling for dependent details (parts) with given processing sequences and times on multiple machines. By defining the decision variables as the start and end moments of detail processing, it is possible to use straightforward restrictions on the variables to satisfy different technological requirements and to formulate optimization tasks that are easy to understand and solve for multiple details and machines. A case-study example is solved for seven base moldings for CNC metalworking machines, processed on five different machines with a given processing order among the details and machines and known processing durations. As a result of solving the linear optimization task, the optimal manufacturing schedule minimizing the overall processing time is obtained. The manufacturing schedule defines the moments of molding delivery, thus minimizing storage costs, and ensures that mounting due times are met. The proposed optimization approach is based on a real manufacturing plant problem. Different processing schedule variants for different technological restrictions were defined and implemented in practice at the Bulgarian company RAIS Ltd. The proposed approach could be generalized to other job-shop scheduling problems in different applications.
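When the processing order among details and machines is fixed in advance, as in the case study, the minimum-makespan start moments of such a linear optimization task coincide with earliest-start (longest-path) times in the precedence graph. A minimal sketch under that assumption, with made-up operation data rather than the RAIS case:

```python
def earliest_start_schedule(durations, precedence):
    """Earliest start moments for operations under fixed precedence constraints.
    durations: {op: processing time}; precedence: list of (before, after) pairs.
    Returns (start_times, makespan). This solves the LP
        min C  s.t.  s_b >= s_a + d_a for each (a, b),  C >= s_i + d_i,
    because with the machine orders fixed no disjunctive choices remain."""
    preds = {op: [] for op in durations}
    for a, b in precedence:
        preds[b].append(a)
    start = {}

    def es(op):  # longest path into `op`, memoized in `start`
        if op not in start:
            start[op] = max((es(p) + durations[p] for p in preds[op]), default=0)
        return start[op]

    for op in durations:
        es(op)
    makespan = max(start[op] + durations[op] for op in durations)
    return start, makespan
```

For example, with operation A (3 time units) preceding both B (2 units) and C (4 units), B and C start at moment 3 and the makespan is 7.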
Abstract: In third-generation nuclear reactors, the core dimensions, the kind of coolant, and the fuel enrichment percentage have changed significantly compared with the second generation. The aim of this article is therefore a comparative investigation of two reactors of the same power, one second-generation and one third-generation, in which the neutronic parameters of both reactors, such as K∞ and Keff and their details, and the thermal-hydraulic parameters, such as power density, specific power, volumetric heat rate, power released per unit of fuel volume, and the volume and mass of cladding and fuel (comprising fissile and fertile fuels), are calculated and compared. Through this comparison, the efficiency and improvements of third-generation nuclear reactors over second-generation reactors of the same power can be distinguished.
To calculate the cited parameters, information such as the core dimensions, the lattice pitch, the fuel material, the enrichment percentage, and the kind of coolant is used. For calculating the neutronic parameters, a neutronics program entitled SIXFAC and the related formulas have been used; for calculating the thermal-hydraulic and other parameters, analytical methods and the related formulas have been applied.
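For reference, the thermal-hydraulic quantities named above have standard textbook definitions. The sketch below computes them for an assumed cylindrical core with illustrative inputs; it is not output of SIXFAC or of the paper's analysis:

```python
import math

def core_parameters(thermal_power_mw, core_height_m, core_radius_m,
                    fuel_mass_kg, fuel_volume_m3):
    """Standard definitions: power density is thermal power per core volume,
    specific power is thermal power per fuel mass, and the volumetric heat
    rate is thermal power per fuel volume. Core assumed cylindrical."""
    core_volume = math.pi * core_radius_m ** 2 * core_height_m
    return {
        "power_density_MW_per_m3": thermal_power_mw / core_volume,
        "specific_power_MW_per_kg": thermal_power_mw / fuel_mass_kg,
        "volumetric_heat_rate_MW_per_m3": thermal_power_mw / fuel_volume_m3,
    }
```

For instance, a 3000 MWth core holding 100,000 kg of fuel has a specific power of 0.03 MW/kg (30 kW/kgU), regardless of reactor generation; the generations differ in the inputs, not the definitions.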
Abstract: In real-time networks, a large number of application programs rely on video data and heterogeneous data transmission techniques. The aim of this research is to present a method for guaranteeing end-to-end quality of service at the application layer when sending video data in compressed form over wireless heterogeneous networks. The method tries to improve video transmission over wireless heterogeneous networks using techniques at the link and application layers. The offered method shows a considerable improvement in the quality observed by the user. In addition, other characteristics, such as a reduction in the amount of data that needs to be resent and limiting the connection period to the time required for retransmission, help make the offered method usable in wireless devices that have limited energy. The presented method and the achieved improvement are simulated and presented in the NS-2 software.
Abstract: This paper presents a heuristic to solve the large-size 0-1 multi-constrained knapsack problem (01MKP), which is NP-hard. Many researchers have used heuristic operators to identify the redundant constraints of a linear programming problem before applying the regular procedure to solve it. We use the intercept matrix to identify the zero-valued variables of the 01MKP, which are known as redundant variables. In this heuristic, first, the dominance property of the intercept matrix of the constraints is exploited to reduce the search space for finding optimal or near-optimal solutions of the 01MKP; second, we improve the solution by using the pseudo-utility ratio based on a surrogate constraint of the 01MKP. The heuristic is tested on benchmark problems of sizes up to 2500 taken from the literature, and the results are compared with the optimum solutions. The space and computational complexity of solving the 01MKP using this approach are also presented. The encouraging results, especially for relatively large test problems, indicate that this heuristic can successfully be used to find good solutions for highly constrained NP-hard problems.
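The pseudo-utility idea can be sketched generically: rank items by profit per surrogate-weighted resource demand, then pack greedily while every constraint stays feasible. This is an illustrative greedy, not the paper's intercept-matrix heuristic, and the uniform surrogate multipliers are an assumption:

```python
def greedy_01mkp(profits, weights, capacities, multipliers=None):
    """Greedy heuristic for the 0-1 multi-constrained knapsack problem.
    weights[k][j] = demand of item j on resource k; `multipliers` are the
    surrogate-constraint weights (uniform by default, an assumption here)."""
    m, n = len(weights), len(profits)
    mult = multipliers or [1.0] * m

    def pseudo_utility(j):
        # Profit per unit of surrogate-weighted resource consumption.
        surrogate = sum(mult[k] * weights[k][j] for k in range(m))
        return profits[j] / surrogate if surrogate else float("inf")

    order = sorted(range(n), key=pseudo_utility, reverse=True)
    used = [0.0] * m
    chosen = []
    for j in order:
        if all(used[k] + weights[k][j] <= capacities[k] for k in range(m)):
            for k in range(m):
                used[k] += weights[k][j]
            chosen.append(j)
    return sorted(chosen), sum(profits[j] for j in chosen)
```

Sharper surrogate multipliers (e.g. from LP dual values) usually improve the ranking; the uniform choice keeps the sketch self-contained.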
Abstract: Concrete performance is strongly affected by the
particle packing degree since it determines the distribution of the
cementitious component and the interaction of mineral particles. By
using packing theory, designers will be able to select optimal aggregate materials for preparing concrete with a low cement content, which is beneficial from a cost point of view. Optimum particle packing
implies minimizing porosity and thereby reducing the amount of
cement paste needed to fill the voids between the aggregate particles,
taking the rheology of the concrete into consideration as well. To achieve good fluidity, superplasticizers are required. The results from
pilot tests at Luleå University of Technology (LTU) show various
forms of the proposed theoretical models, and the empirical approach
taken in the study seems to provide a safer basis for developing new,
improved packing models.
Abstract: The selection of parents and breeding strategies for successful maize hybrid production is facilitated by heterotic grouping of parental lines and determination of their combining abilities. Fourteen maize inbred lines used in maize breeding programs in Iran were crossed in a diallel mating design. The 91 F1 hybrids and the 14 parental lines were studied over two years at four locations in Iran to investigate the combining ability of the genotypes for grain yield and to determine heterotic patterns among the germplasm sources, using both Griffing's method and the biplot approach for diallel analysis. The graphical representation offered by biplot analysis allowed a rapid and effective overview of the general combining ability (GCA) and specific combining ability (SCA) effects of the inbred lines, their performance in crosses, as well as the grouping patterns of similar genotypes. GCA and SCA effects were significant for grain yield (GY). Based on their significant positive GCA effects, the lines derived from LSC could be used as parents in crosses to increase GY. The maximum best-parent heterosis values and the highest SCA effects resulted from the crosses B73 × MO17 and A679 × MO17 for GY. The best heterotic pattern was LSC × RYD, which would be potentially useful in maize breeding programs for obtaining high-yielding hybrids in similar climates of Iran.
Abstract: With the rapid development of the life sciences and the flood of genomic information, the need for faster and more scalable searching methods has become urgent. One of the approaches investigated is indexing. Indexing methods have been grouped into three categories: length-based index algorithms, transformation-based algorithms, and mixed-technique algorithms. In this research, we focus on the transformation-based methods. We embed the N-gram method into the transformation-based method to build an inverted index table. We then apply parallel methods to speed up the index building time and to reduce the overall retrieval time when querying the genomic database. Our experiments show that the N-gram transformation algorithm is an economical solution; it saves both time and space. The results show that the size of the index is smaller than the size of the dataset when the N-gram size is 5 or 6. The results of the parallel N-gram transformation algorithm indicate that the use of parallel programming with large datasets is promising and can be improved further.
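The serial core of an N-gram inverted index can be sketched as follows. This is an illustrative sketch; the paper's contribution is parallelizing the index build, and its exact table layout may differ:

```python
def build_ngram_index(sequences, n=5):
    """Build an inverted index mapping each n-gram to the (sequence id, offset)
    positions where it occurs. In a parallel build, the sequence collection
    would be partitioned across workers and the partial tables merged."""
    index = {}
    for seq_id, seq in enumerate(sequences):
        for i in range(len(seq) - n + 1):
            index.setdefault(seq[i:i + n], []).append((seq_id, i))
    return index

def query(index, pattern, n=5):
    """Candidate sequences containing the pattern's leading n-gram; a full
    system would intersect postings for longer patterns and verify matches."""
    return sorted({seq_id for seq_id, _ in index.get(pattern[:n], [])})
```

The index can be smaller than the raw dataset when many n-grams repeat, which is consistent with the n = 5 and 6 observation in the abstract.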
Abstract: This study was conducted to compare the effects of two country models, Taiwan and Singapore, using the TIMSS database. The researchers used multi-group hierarchical linear modeling techniques to compare the effects of the two country models, testing the hypotheses on 4,046 Taiwanese students and 4,599 Singaporean students from 2007 at two levels: the class level and the student (individual) level. Design quality is a class-level variable; the student-level variables are achievement and self-confidence. The results challenge the widely held view that retention has a positive impact on self-confidence. Suggestions for future research are discussed.