Abstract: Human-related information security breaches within organizations are primarily caused by employees who have not been made aware of the importance of protecting the information they work with. Information security awareness is accordingly attracting more attention from industry, because stakeholders are held accountable for the information with which they work. To address shortcomings in existing information security awareness models, the authors developed an Information Security Retrieval and Awareness model, entitled "ISRA", tailored specifically towards enhancing information security awareness among all users of information in industry. This paper is principally aimed at expounding a prototype of the ISRA model to highlight the advantages of utilizing the model. The prototype focuses on the non-technical, human-related information security issues in industry. It ensures that all stakeholders in an organization are part of an information security awareness process, and that these stakeholders are able to retrieve specific information on the security issues relevant to their job category, preventing them from being overburdened with redundant information.
Abstract: In this work, the surgical theater of a local hospital in KSA was analyzed using simulation. The focus was on answering questions related to how many Operating Rooms (ORs) to open, and on analyzing the performance of the surgical theater in general and of the Post Anesthesia Care Unit (PACU) in particular, to assist in making decisions regarding PACU staffing. The surgical theater consists of ten operating rooms and a PACU with a maximum capacity of fifteen beds. Different rules for sequencing the surgical cases were tested, and the Longest Case First (LCF) rule was superior to the others. The results of the different alternatives developed and tested can be used by the manager as a tool to plan and manage the OR and PACU.
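The Longest Case First rule can be illustrated with a minimal list-scheduling sketch (not the paper's simulation model; the case durations and OR count below are hypothetical):

```python
# Illustrative sketch of the Longest Case First (LCF) sequencing rule:
# sort cases by duration (longest first) and assign each to the
# earliest-available OR. Durations and OR count are hypothetical.

def schedule(cases, n_ors, longest_first=True):
    """Assign cases to ORs; return the makespan (time the last OR finishes)."""
    order = sorted(cases, reverse=longest_first)
    free_at = [0.0] * n_ors                   # next free time of each OR
    for duration in order:
        i = free_at.index(min(free_at))       # earliest-available OR
        free_at[i] += duration
    return max(free_at)

cases = [180, 120, 90, 60, 45, 45, 30, 30]    # minutes, hypothetical
print(schedule(cases, n_ors=3, longest_first=True))   # LCF: 210.0
print(schedule(cases, n_ors=3, longest_first=False))  # Shortest Case First: 270.0
```

On this toy instance LCF finishes 60 minutes earlier than Shortest Case First, the same kind of comparison the simulation study performs at hospital scale.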
Abstract: Owing to the environmental and price concerns of the current energy crisis, scientists and technologists around the globe are intensively searching for new forms of clean energy with less environmental impact that will reduce the high dependency on fossil fuels. Hydrogen, in particular, can be produced economically from biomass via thermochemical processes, including pyrolysis and gasification, and production can be further enhanced through in-situ carbon dioxide removal using calcium oxide. This work focuses on the synthesis and development of a flowsheet for the enhanced biomass gasification process in PETRONAS's iCON process simulation software. The hydrogen prediction model is run at operating temperatures between 600 and 1000 °C at atmospheric pressure. The effects of temperature, steam-to-biomass ratio and adsorbent-to-biomass ratio were studied, and a hydrogen mole fraction of 0.85 is predicted in the product gas. Comparisons of the results are also made with experimental data from the literature. The preliminary economic potential of the developed system is RM 12.57 × 10^6 (equivalent to USD 3.77 × 10^6) annually, indicating the economic viability of the process.
Abstract: In a world worried about water resources, with the shadow of drought and famine looming all around, the quality of water is as important as its quantity. The source of all concern is the constant reduction of quality water available per capita for different uses. With an average annual precipitation of 250 mm, compared to the world average of 800 mm, Iran is considered a water-scarce country, and the disparity in rainfall distribution, the limitations of renewable resources and the concentration of the population on the margins of deserts and water-scarce areas have intensified the problem. The shortage of per capita renewable freshwater and its poor quality in large areas of the country, which have saline, brackish or hard water resources, together with the profusion of natural and artificial pollutants, have caused the deterioration of water quality. Among the methods for treating and using these waters is the application of membrane technologies, which have come into focus in recent years due to their great advantages. Nanofiltration is quite efficient in eliminating multivalent ions, and thanks to the possibility of production at different capacities, its applicability as a point-of-use treatment process, and its lower energy requirement compared to Reverse Osmosis, it can revolutionize the water and wastewater sector in the years to come. This article studies the capacities of the different water resources in the Persian Gulf and Oman Sea watershed basins, and assesses the possibility of using nanofiltration to treat brackish and non-conventional waters in these basins.
Abstract: Weather systems use enormously complex combinations of numerical tools for study and forecasting. Unfortunately, due to phenomena in the world climate, such as the greenhouse effect, classical models may become insufficient, mostly because they lack adaptation. The weather forecasting problem is therefore well suited to heuristic approaches, such as Evolutionary Algorithms. Experimentation with heuristic methods like the Particle Swarm Optimization (PSO) algorithm can lead to new insights or promising models that can be fine-tuned with more focused techniques. This paper describes a PSO approach for the analysis and prediction of data and provides experimental results of the aforementioned method on real-world meteorological time series.
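The core PSO update can be sketched in a few lines. Below is a minimal, generic PSO (not the paper's model), fitting the slope and intercept of a linear trend to a tiny hypothetical temperature series by minimizing squared error; all data and parameter values are illustrative assumptions:

```python
import random

# Minimal Particle Swarm Optimization sketch (illustration only): fit a
# linear trend to a hypothetical "temperature" series by minimizing the
# sum of squared errors over (slope, intercept).
random.seed(0)
series = [14.8, 15.1, 15.5, 15.9, 16.2, 16.6]   # hypothetical readings

def cost(p):
    a, b = p
    return sum((a * t + b - y) ** 2 for t, y in enumerate(series))

n, dims, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5       # swarm size and coefficients
pos = [[random.uniform(-5, 5) for _ in range(dims)] for _ in range(n)]
vel = [[0.0] * dims for _ in range(n)]
pbest = [p[:] for p in pos]                     # personal bests
gbest = min(pbest, key=cost)                    # global best

for _ in range(200):
    for i in range(n):
        for d in range(dims):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=cost)

print(gbest, cost(gbest))   # converges near the least-squares fit
```

Each particle blends inertia, attraction to its own best position, and attraction to the swarm's best position; the same update drives the meteorological fitting in the paper, with a more elaborate cost function.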
Abstract: The shortest path (SP) problem concerns finding the path from a specified origin to a specified destination in a given network that minimizes the total cost associated with the path. This problem has widespread applications. Important applications of the SP problem include vehicle routing in transportation systems, particularly in in-vehicle Route Guidance Systems (RGS), and the traffic assignment problem in transportation planning. Well-known evolutionary methods such as Genetic Algorithms (GA), Ant Colony Optimization and Particle Swarm Optimization (PSO) have been applied to complex optimization problems to overcome the shortcomings of existing shortest path analysis methods. Various researchers have reported that PSO performs better than other evolutionary optimization algorithms in terms of success rate and solution quality. Furthermore, Geographic Information Systems (GIS) have emerged as key information systems for geospatial data analysis and visualization. This paper focuses on the application of PSO to solving the shortest path problem between multiple points of interest (POI), based on spatial data of Allahabad City and traffic speed data collected using GPS. Geovisualization of the results of the analysis is carried out in GIS.
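The classical baseline that evolutionary shortest-path methods are compared against is Dijkstra's algorithm. A minimal sketch on a toy road network (hypothetical intersections and travel times, not the Allahabad data):

```python
import heapq

# Classical shortest-path baseline (Dijkstra) on a toy road network with
# hypothetical nodes and travel times in minutes.
def dijkstra(graph, src, dst):
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]                        # (distance, node) min-heap
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):  # stale heap entry
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst                   # reconstruct path backwards
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return dist[dst], path[::-1]

roads = {                                  # hypothetical intersections
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}
print(dijkstra(roads, "A", "D"))   # (8, ['A', 'C', 'B', 'D'])
```

Heuristics such as PSO trade this guaranteed optimality for scalability and flexibility on large networks with time-varying costs such as GPS-derived traffic speeds.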
Abstract: This work focused on a radial flow reactor for large-scale methanol synthesis in which the heat transfer is of the cross-flow type. The effects of operating conditions, including the reactor inlet air temperature, the heating pipe temperature and the air flow rate, on the cross-flow heat transfer were investigated. The results showed that the temperature profile of the area in front of the heating pipe was only slightly affected by all the operating conditions; the area whose temperature profile was mainly influenced was the area behind the heating pipe, and the heat transfer direction followed the air flow direction. In order to provide a basis for radial flow reactor design calculations, the dimensionless-group method was used to fit the bed effective thermal conductivity and the wall heat transfer coefficient, calculated from the mathematical model, as functions of the product of the Reynolds and Prandtl numbers. The comparison of experimental data and calculated values showed that the calculated values fit the experimental data very well, so the resulting formulas can be used for reactor design calculations.
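A dimensionless-group fit of this kind typically takes a power-law form such as Nu = a(Re·Pr)^b, estimated by linear least squares on the logarithms. A sketch with synthetic data (not the paper's measurements; the correlation form and coefficients are assumptions for illustration):

```python
import math

# Dimensionless-group fitting sketch (synthetic data): fit a power-law
# correlation Nu = a * (Re*Pr)**b by linear least squares on logarithms.
pe = [500, 1000, 2000, 4000, 8000]          # Peclet = Re * Pr, hypothetical
nu = [0.8 * p ** 0.5 for p in pe]           # synthetic "measured" Nusselt

x = [math.log(p) for p in pe]
y = [math.log(v) for v in nu]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)       # slope of log-log regression
a = math.exp(ybar - b * xbar)               # intercept back-transformed
print(round(a, 3), round(b, 3))             # recovers a = 0.8, b = 0.5
```

Because the synthetic data follow the power law exactly, the regression recovers the generating coefficients; with real measurements the same fit yields the best-fitting a and b along with residuals to assess the correlation's quality.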
Abstract: Seismic design may require non-conventional concepts, because the stiffness and layout of the structure have a great effect on the overall structural behaviour, on the seismic load intensity and on the internal force distribution. To find an economical and optimal structural configuration, the key issue is the optimal design of the lateral load resisting system. This paper focuses on the optimal design of regular, concentrically braced frame (CBF) multi-storey steel building structures. The optimal configurations are determined by a numerical method based on a genetic algorithm approach developed by the authors. The aim is to find structural configurations with minimum structural cost. The design constraints of the objective function are assigned in accordance with Eurocode 3 and Eurocode 8 guidelines. Results are presented for various building geometries, different seismic intensities, and different levels of energy dissipation.
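The genetic-algorithm search can be sketched on a deliberately simplified stand-in problem. The cost model below is entirely hypothetical (a fixed cost per braced storey plus a penalty when fewer than four storeys are braced, standing in for a stiffness/drift constraint); it is not the authors' Eurocode-based formulation, only the GA machinery of selection, crossover and mutation:

```python
import random

# Toy genetic-algorithm sketch with a hypothetical cost model: each gene
# marks whether a storey receives a brace; fitness is material cost plus
# a penalty when fewer than four storeys are braced.
random.seed(1)
STOREYS, POP, GENS = 10, 30, 60

def cost(genes):
    braced = sum(genes)
    penalty = 100 * max(0, 4 - braced)   # constraint-violation penalty
    return 3 * braced + penalty          # 3 cost units per braced storey

pop = [[random.randint(0, 1) for _ in range(STOREYS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=cost)
    parents = pop[:POP // 2]             # elitist truncation selection
    children = []
    while len(children) < POP - len(parents):
        p1, p2 = random.sample(parents, 2)
        cut = random.randrange(1, STOREYS)        # one-point crossover
        child = p1[:cut] + p2[cut:]
        i = random.randrange(STOREYS)             # occasional one-bit mutation
        child[i] ^= random.random() < 0.2
        children.append(child)
    pop = parents + children

best = min(pop, key=cost)
print(cost(best))   # the global optimum of this toy model is 12
```

In the real problem the chromosome encodes brace topology and member sections, and the fitness evaluation runs the Eurocode 3/8 design checks, but the evolutionary loop has the same shape.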
Abstract: The possibility of using cassava residue containing
49.66% starch, 21.47% cellulose, 12.97% hemicellulose, and 21.86%
lignin as a raw material to produce glucose using enzymatic
hydrolysis was investigated. In the experiment, each reactor contained the cassava residue, bacterial cells, and production medium. The effects of particle size (40 mesh and 60 mesh) and bacterial strain (A002 and M015, isolated from the Thai higher termite Microcerotermes sp.) on the glucose concentration at 37°C were examined. High performance liquid chromatography (HPLC) with a
refractive index detector was used to determine the quantity of
glucose. The maximum glucose concentration obtained at 37°C using
strain A002 and 60 mesh of the cassava residue was 1.51 g/L at 10 h.
Abstract: With the rapid development of the life sciences and the flood of genomic information, the need for faster and more scalable searching methods has become urgent. One of the approaches that have been investigated is indexing. Indexing methods can be categorized into three groups: length-based index algorithms, transformation-based algorithms, and mixed-technique algorithms. In this research, we focused on the transformation-based methods. We embedded the N-gram method into a transformation-based method to build an inverted index table, and then applied parallel methods to speed up the index building time and to reduce the overall retrieval time when querying the genomic database. Our experiments show that the N-gram transformation algorithm is an economical solution, saving both time and space: the size of the index is smaller than the size of the dataset when the N-gram size is 5 or 6. The results of the parallel N-gram transformation algorithm indicate that the use of parallel programming with large datasets is promising and can be improved further.
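The serial core of an N-gram inverted index can be sketched as follows (illustration only; the sequences are hypothetical, and the paper's contribution is parallelizing this build step):

```python
from collections import defaultdict

# Minimal N-gram inverted index sketch: each sequence is decomposed into
# overlapping N-grams, and the index maps every N-gram to the set of
# sequences containing it. Querying intersects the posting sets.
N = 5

def ngrams(seq, n=N):
    return {seq[i:i + n] for i in range(len(seq) - n + 1)}

def build_index(sequences):
    index = defaultdict(set)
    for sid, seq in sequences.items():
        for gram in ngrams(seq):
            index[gram].add(sid)
    return index

def query(index, pattern):
    """Candidate sequences sharing every N-gram of the pattern."""
    grams = ngrams(pattern)
    sets = [index.get(g, set()) for g in grams]
    return set.intersection(*sets) if sets else set()

db = {"s1": "ACGTACGTTA", "s2": "TTGCACGTAC", "s3": "GGGGCCCCAA"}
idx = build_index(db)
print(sorted(query(idx, "ACGTA")))   # ['s1', 's2']
```

Because each sequence's N-grams can be extracted independently, the build loop partitions naturally across workers, which is what makes the parallel variant effective on large genomic datasets.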
Abstract: In recent years, a number of works proposing the
combination of multiple classifiers to produce a single
classification have been reported in remote sensing literature. The
resulting classifier, referred to as an ensemble classifier, is
generally found to be more accurate than any of the individual
classifiers making up the ensemble. As accuracy is the primary
concern, much of the research in the field of land cover
classification is focused on improving classification accuracy. This
study compares the performance of four ensemble approaches
(boosting, bagging, DECORATE and random subspace) with a
univariate decision tree as the base classifier. Two training datasets, one without any noise and the other with 20 percent noise, were used to judge the performance of the different ensemble approaches. Results with the noise-free dataset suggest an improvement of about 4% in classification accuracy with all ensemble approaches in comparison to the results provided by the univariate decision tree classifier; the highest classification accuracy, 87.43%, was achieved by the boosted decision tree. A comparison of results with the noisy dataset suggests that the bagging, DECORATE and random subspace approaches work well with these data, whereas the performance of the boosted decision tree degrades: it achieves a classification accuracy of 79.7%, which is lower than the 80.02% achieved by the unboosted decision tree classifier.
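The bagging idea, bootstrap resampling plus majority voting, can be sketched on a toy one-dimensional problem (hypothetical data, with decision stumps standing in for the univariate decision trees used in the study):

```python
import random

# Bagging sketch on a toy 1-D dataset: each base classifier is a decision
# stump (threshold rule) trained on a bootstrap resample; the ensemble
# predicts by majority vote. Data are hypothetical.
random.seed(42)
X = [0.1, 0.4, 0.35, 0.8, 0.9, 0.75, 0.3, 0.85]   # feature values
y = [0, 0, 0, 1, 1, 1, 0, 1]                      # class labels

def train_stump(xs, ys):
    """Pick the threshold minimizing training error (predict 1 if x > t)."""
    best = (None, len(xs) + 1)
    for t in xs:
        err = sum((x > t) != label for x, label in zip(xs, ys))
        if err < best[1]:
            best = (t, err)
    return best[0]

def bagging(X, y, n_models=15):
    stumps = []
    for _ in range(n_models):
        idx = [random.randrange(len(X)) for _ in range(len(X))]  # bootstrap
        stumps.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return stumps

def predict(stumps, x):
    votes = sum(x > t for t in stumps)            # majority vote
    return int(votes * 2 > len(stumps))

stumps = bagging(X, y)
print([predict(stumps, x) for x in [0.2, 0.95]])
```

Averaging over resamples is what makes bagging robust to label noise, consistent with the study's finding that bagging holds up on the noisy dataset while boosting, which concentrates on hard (often mislabeled) examples, degrades.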
Abstract: In this paper, full state feedback controllers capable of regulating and tracking a speed trajectory are presented. A fourth-order nonlinear mean value model of a 448 kW turbocharged diesel engine, published earlier, is used for the purpose. For controller design, the nonlinear model is linearized and represented in state-space form. Full state feedback controllers capable of meeting the varying speed demands of drivers are presented. The main focus here is to investigate the sensitivity of the controller to perturbations in the parameters of the original nonlinear model. The suggested controller is shown to be highly insensitive to parameter variations, indicating that it is likely to perform with the same accuracy even after significant wear and tear of the engine over years of use.
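The structure of such a controller, u = −Kx applied to a linearized state-space model, can be illustrated on a much smaller system (a hypothetical two-state discrete-time model, not the fourth-order engine model; the gain K below is an assumed stabilizing value, of the kind obtained offline by pole placement):

```python
# Toy full-state-feedback sketch: a 2-state discrete linear system
# x[k+1] = A x[k] + B u[k] with u = -K x. With a stabilizing gain K,
# the closed-loop state decays to the origin. All values hypothetical.
A = [[1.0, 0.1],
     [0.0, 1.0]]          # discretized double integrator
B = [0.0, 0.1]
K = [8.0, 5.0]            # assumed stabilizing gain

def step(x):
    u = -(K[0] * x[0] + K[1] * x[1])          # full state feedback law
    return [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
            A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]

x = [1.0, 0.0]            # initial speed error
for _ in range(100):
    x = step(x)
print(x)                  # state driven close to the origin
```

Parameter-sensitivity studies like the paper's amount to perturbing A and B while holding K fixed and checking that the closed-loop decay survives the perturbation.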
Abstract: The ability of UML to handle the modeling of complex industrial software applications has increased its popularity to the extent of becoming the de-facto design language. Although its rich graphical notation, naturally oriented towards the object-oriented concept, facilitates understandability, it hardly succeeds in capturing all domain-specific aspects in a satisfactory way. OCL, the standard language for expressing additional constraints on UML models, has great potential to improve expressiveness. Unfortunately, it suffers from a weak formalism due to its poor semantics, resulting in many obstacles to building tool support and thus to its application in industry. For this reason, much research has been devoted to formalizing OCL expressions using more rigorous approaches. Our contribution joins this work in a complementary way, since it focuses specifically on the OCL predefined properties, which constitute an important part of the construction of OCL expressions. Using formal methods, we succeed in rigorously expressing the OCL predefined functions.
Abstract: The number of features required to represent an image can be very large, and using all available features to recognize objects can suffer from the curse of dimensionality. Feature selection and extraction is the pre-processing step of image mining. The main issues in analyzing images are the effective identification of features and their extraction. The mining problem addressed here is the grouping of features for different shapes. Experiments were conducted using the shape outline as the feature. Shape outline readings are put through normalization and a dimensionality reduction process using an eigenvector-based method to produce a new set of readings. After this pre-processing step, the data are grouped by shape. Through statistical analysis of these readings together with peak measures, a robust classification and recognition process is achieved. Tests showed that the suggested methods are able to automatically recognize objects from their shapes. Finally, experiments also demonstrate the system's invariance to rotation, translation, scale, reflection and, to a small degree, distortion.
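An eigenvector-based reduction of this kind is essentially principal component analysis. A minimal sketch on 2-D points standing in for the shape-outline readings (the data are hypothetical; the real pipeline works on higher-dimensional outline vectors):

```python
import math

# PCA sketch as a stand-in for the eigenvector-based reduction step:
# centre the data, form the 2x2 covariance matrix, and project each point
# onto the leading eigenvector. Points are hypothetical.
points = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2),
          (3.1, 3.0), (2.3, 2.7), (2.0, 1.6), (1.0, 1.1)]

n = len(points)
mx = sum(p[0] for p in points) / n
my = sum(p[1] for p in points) / n
cxx = sum((x - mx) ** 2 for x, _ in points) / (n - 1)
cyy = sum((y - my) ** 2 for _, y in points) / (n - 1)
cxy = sum((x - mx) * (y - my) for x, y in points) / (n - 1)

# Largest eigenvalue of the 2x2 covariance matrix (closed form),
# and its eigenvector (cxy, lam - cxx), normalized.
tr, det = cxx + cyy, cxx * cyy - cxy ** 2
lam = tr / 2 + math.sqrt(tr ** 2 / 4 - det)
ex, ey = cxy, lam - cxx
norm = math.hypot(ex, ey)
ex, ey = ex / norm, ey / norm

reduced = [round((x - mx) * ex + (y - my) * ey, 3) for x, y in points]
print(reduced)   # one coordinate per point along the principal axis
```

Keeping only the leading components preserves most of the variance of the outline readings while cutting the dimensionality, which is what makes the subsequent grouping and classification tractable.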
Abstract: There are three approaches to complete Bayesian Network (BN) model construction: totally expert-centred, totally data-centred, and semi data-centred. These three approaches constitute the basis of the empirical investigation undertaken and reported in this paper. The objective is to determine which of these three approaches is optimal for constructing a BN-based model for the performance assessment of students' laboratory work in a virtual electronic laboratory environment. BN models were constructed using all three approaches, with respect to the focus domain, and compared using a set of optimality criteria. In addition, the impact of the size and source of the training data on the performance of the totally data-centred and semi data-centred models was investigated. The results of the investigation provide additional insight for BN model constructors and contribute to the literature by providing supportive evidence for the conceptual feasibility and efficiency of structure and parameter learning from data. In addition, the results highlight other interesting themes.
Abstract: This paper investigates the spatial structure of employment in the Jakarta Metropolitan Area (JMA), with reference to the concept of the Southeast Asian extended metropolitan region (EMR). A combination of factor analysis and local Getis-Ord (Gi*) hot-spot analysis is used to identify clusters of employment in the region, including those of the urban and agriculture sectors. Spatial statistical analysis is further used to probe the spatial association of identified employment clusters with their surroundings on several dimensions, including the influence of the central business district (CBD) in Jakarta city on employment density in the region, the spatial impacts of urban expansion on population growth, and the degree of urban-rural interaction. The degree of spatial interaction for the whole JMA is measured by the patterns of commuting trips destined for the various employment clusters. Results reveal the strong role of the urban core of Jakarta, and the regional CBD, as the centre for mixed job sectors such as retail, wholesale, services and finance. Manufacturing and local government services, on the other hand, form corridors radiating out of the urban core, reaching out to the agriculture zones in the fringes. Strong associations between the urban expansion corridors and population growth, and urban-rural mix, are revealed particularly in the eastern and western parts of the JMA. Metropolitan-wide commuting patterns are focused on the urban core of Jakarta and the CBD, while relatively local commuting patterns are shown to be prevalent for the employment corridors.
Abstract: Document clustering has become an essential technology with the popularity of the Internet, which means that fast, high-quality document clustering techniques play a central role. Text clustering, or simply clustering, is about discovering semantically related groups in an unstructured collection of documents. Clustering has long been popular because it provides unique ways of digesting and generalizing large amounts of information. One of the issues in clustering is extracting the proper features (concepts) of a problem domain. Existing clustering technology mainly focuses on term weight calculation; to achieve more accurate document clustering, more informative features, including concept weights, are important. Feature selection is important for the clustering process because irrelevant or redundant features may mislead the clustering results. To counteract this issue, the proposed system introduces concept weights into a text clustering system built on the k-means algorithm in accordance with the principles of ontology, so that the important words of a cluster can be identified by their weight values. To a certain extent, this resolves the semantic problem in specific areas.
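The underlying k-means loop over weighted document vectors can be sketched as follows (illustration only: the two-term vectors and centroid seeds are hypothetical, and the paper's system additionally derives ontology-based concept weights):

```python
# Minimal k-means sketch over toy term-weight vectors.
docs = {
    "d1": [0.9, 0.1], "d2": [0.8, 0.2],   # weights for ("sport", "finance")
    "d3": [0.1, 0.9], "d4": [0.2, 0.8],
}

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(vectors, centroids, iters=10):
    clusters = {}
    for _ in range(iters):
        clusters = {i: [] for i in range(len(centroids))}
        for v in vectors:                 # assignment step
            i = min(range(len(centroids)),
                    key=lambda i: dist2(v, centroids[i]))
            clusters[i].append(v)
        for i, members in clusters.items():   # update step
            if members:
                centroids[i] = [sum(c) / len(members)
                                for c in zip(*members)]
    return centroids, clusters

cents, clusters = kmeans(list(docs.values()), centroids=[[1, 0], [0, 1]])
print(cents)   # centroids near [0.85, 0.15] and [0.15, 0.85]
```

Replacing the raw term weights in these vectors with concept weights changes the distance computations, and hence the clusters, without altering the algorithm itself.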
Abstract: This paper proposes a method, combining color and layout features, for identifying documents captured by low-resolution handheld devices. On the one hand, the document image color density surface is estimated and represented by an equivalent ellipse; on the other hand, the document's shallow layout structure is computed and represented hierarchically. Our identification method first uses the color information in the documents to focus the search space on documents having a similar color distribution, and then selects the document having the most similar layout structure in the remainder of the search space.
Abstract: In recent years, the world has witnessed significant work in the field of manufacturing. Special efforts have been made in the implementation of new technologies and of management and control systems, among many others, all of which have advanced the field. Following closely on this, given the scope of new projects and the need to turn existing flexible ideas into more autonomous and intelligent ones, i.e. to move toward more intelligent manufacturing, the present paper aims to contribute to the analysis, and to a few customization issues, of a new iCIM 3000 system at the IPSAM. In this process, special emphasis is placed on the material flow problem. Besides offering a description and analysis of the system and its main parts, some tips on how to define other possible alternative material flow scenarios and a partial analysis of the combinatorial nature of the problem are offered as well. All this is done with the intention of relating it to the use of simulation tools, which are briefly addressed with a special focus on the Witness simulation package. For better comprehension, the previous elements are supported by a few figures and expressions that help obtain the necessary data. These data and others will be used in the future, when simulating the scenarios in search of the best material flow configurations.
Abstract: Lately, significant work in the area of Intelligent Manufacturing has become public, mainly applied within the frame of industrial purposes. Special efforts have been made in the implementation of new technologies and of management and control systems, among many others, all of which have advanced the field. Aware of all this, and given the scope of new projects and the need to turn existing flexible ideas into more autonomous and intelligent ones, i.e. Intelligent Manufacturing, the present paper aims to contribute to the design and analysis of the material flow in systems, cells or workstations under this new "intelligent" denomination. Besides offering a conceptual basis for some of the key points to be taken into account and some general principles to consider in the design and analysis of the material flow, some tips on how to define other possible alternative material flow scenarios and a classification of the states of a system, cell or workstation are offered as well. All this is done with the intention of relating it to the use of simulation tools, which are briefly addressed with a special focus on the Witness simulation package. For better comprehension, the previous elements are supported by a detailed layout, other figures and a few expressions that help obtain the necessary data. These data and others will be used in the future, when simulating the scenarios in search of the best material flow configurations.