Abstract: The 21st century has transformed the labor market
landscape, posing new and different demands on
university graduates as well as university lecturers, which means that
the knowledge and academic skills students acquire in the course of
their studies should be applicable and transferable from the higher
education context to their future professional careers. Given the
context of the Languages for Specific Purposes (LSP) classroom, the
teachers’ objective is not only to teach the language itself, but also to
prepare students to use that language as a medium to develop generic
skills and competences. These include media and information
literacy, critical and creative thinking, problem-solving and analytical
skills, effective written and oral communication, as well as
collaborative work and social skills, all of which are necessary to
make university graduates more competitive in everyday professional
environments. On the other hand, due to limitations of time and large
numbers of students in classes, the frequently topic-centered syllabus
of LSP courses places considerable focus on acquiring the subject
matter and specialist vocabulary rather than on sufficiently developing the
skills and competences required by students’ prospective employers.
This paper intends to explore some of those issues as viewed both by
LSP lecturers and by business professionals in their respective
surveys. The surveys were conducted among more than 50 LSP
lecturers at higher education institutions in Croatia, more than 40 HR
professionals and more than 60 university graduates with degrees in
economics and/or business working in management positions in
mainly large and medium-sized companies in Croatia. Various elements of LSP course content have been taken into
consideration in this research, including reading and listening
comprehension of specialist texts, acquisition of specialist vocabulary
and grammatical structures, as well as presentation and negotiation
skills. The ability to hold meetings, conduct business correspondence,
write reports, academic texts and case studies, and take part in debates
was also taken into consideration, as was informal business
communication, business etiquette and core courses delivered in a
foreign language. The results of the surveys conducted among LSP
lecturers will be analyzed with reference to what extent those
elements are included in their courses and how consistently and
thoroughly they are evaluated according to their course requirements.
Their opinions will be compared to the results of the surveys
conducted among professionals from a range of industries in Croatia
so as to examine how useful and important they perceive the same
elements of the LSP course content in their working environments.
Such comparative analysis will thus show to what extent the syllabi
of LSP courses meet the demands of the employment market when it
comes to the students’ language skills and competences, as well as
transferable skills. Finally, the findings will also be compared to the
observations based on practical teaching experience and the relevant
sources used in this research. In conclusion, the ideas and
observations in this paper are open-ended questions without
conclusive answers, but they might
prompt LSP lecturers to re-evaluate the content and objectives of
their course syllabi.
Abstract: Data fusion technology can be an effective way to extract
useful information from multiple sources of data, and it has been widely
applied. This paper presents a data fusion
approach for multimedia data for event detection on Twitter using
Dempster-Shafer evidence theory. The methodology applies a mining
algorithm to detect the event. There are two types of data in the
fusion. The first is features extracted from text using the bag-of-words
method, weighted using the term frequency-inverse
document frequency (TF-IDF). The second is the visual features
extracted by applying scale-invariant feature transform (SIFT). The
Dempster-Shafer theory of evidence is applied in order to fuse the
information from these two sources. Our experiments indicate
that, compared to approaches using an individual data source, the
proposed data fusion approach can increase the prediction accuracy
of event detection. The experimental results show that the proposed
method achieved a high accuracy of 0.97, compared with 0.93 using
text only and 0.86 using images only.
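The fusion step can be illustrated with a minimal sketch of Dempster's rule of combination; the mass values below are hypothetical, not taken from the paper:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions,
    each mapping frozenset hypotheses to belief mass."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    # normalize by 1 - K, where K is the total conflict
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Hypothetical masses from the text-based and image-based detectors.
E, N = frozenset({"event"}), frozenset({"no_event"})
theta = E | N  # the full frame of discernment (ignorance)
m_text = {E: 0.7, N: 0.2, theta: 0.1}
m_image = {E: 0.6, N: 0.3, theta: 0.1}
fused = combine(m_text, m_image)  # fused[E] ≈ 0.82
```

Fusing the two sources raises the belief in "event" above either detector's individual mass, which is the effect the abstract reports.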
Abstract: Ant algorithms are well-known metaheuristics which
have been widely used for two decades. In most of the literature,
an ant is a constructive heuristic able to build a solution from scratch.
However, other types of ant algorithms have recently emerged, so the
discussion is not limited to the common framework of
constructive ant algorithms. Generally, at each generation of an ant
algorithm, each ant builds a solution step by step by adding an
element to it. Each choice is based on the greedy force (also called the
visibility, the short term profit or the heuristic information) and the
trail system (central memory which collects historical information of
the search process). Usually, all the ants of the population have the
same characteristics and behaviors. In contrast, in this paper a new
type of ant metaheuristic is proposed, namely SMART (for Solution
Methods with Ants Running by Types). It relies on the use of different
populations of ants, where each population has its own personality.
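The constructive step described above, combining the greedy force with the trail system, can be sketched as follows (tau, eta, alpha and beta are the usual ACO quantities; the names are illustrative):

```python
import random

def choose_next(candidates, tau, eta, alpha=1.0, beta=2.0, rng=random):
    """One constructive step of a generic ant: the next element is
    drawn with probability proportional to trail^alpha (historical
    information) times visibility^beta (greedy force)."""
    weights = [tau[c] ** alpha * eta[c] ** beta for c in candidates]
    r = rng.random() * sum(weights)
    acc = 0.0
    for c, w in zip(candidates, weights):
        acc += w
        if acc >= r:
            return c
    return candidates[-1]  # guard against floating-point round-off
```

With a strongly skewed visibility, the ant picks the greedy choice most of the time while still exploring the alternatives occasionally.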
Abstract: Frequency transformation with Pascal matrix
equations is a method for transforming an electronic filter (analogue
or digital) into another filter. The technique is based on frequency
transformation in the s-domain, bilinear z-transform with pre-warping
frequency, inverse bilinear transformation and a very useful
application of Pascal’s triangle that simplifies computing and
enables calculation by hand when transforming from one filter to
another. This paper will introduce two methods to transform a filter
into a digital filter: frequency transformation from the s-domain into
the z-domain; and frequency transformation in the z-domain. Further,
two Pascal matrix equations are derived: an analogue to digital filter
Pascal matrix equation and a digital to digital filter Pascal matrix
equation. These are used to design a desired digital filter from a given
filter.
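As a rough illustration of the ingredients involved (a sketch, not the paper's exact matrix equations), the Pascal matrix entries are binomial coefficients from Pascal's triangle, and pre-warping maps a desired digital cutoff to its analogue counterpart before the bilinear z-transform:

```python
from math import comb, tan, pi

def pascal_matrix(n):
    """Lower-triangular matrix of binomial coefficients C(i, j);
    each row is a row of Pascal's triangle, which is what makes
    hand calculation of the transformation practical."""
    return [[comb(i, j) if j <= i else 0 for j in range(n)]
            for i in range(n)]

def prewarp(f_digital, fs):
    """Pre-warped analogue frequency (rad/s) for a desired digital
    cutoff f_digital (Hz) at sampling rate fs (Hz), compensating
    for the frequency warping of the bilinear z-transform."""
    return 2 * fs * tan(pi * f_digital / fs)
```

For cutoffs far below the Nyquist frequency the pre-warped value is close to 2*pi*f, and the correction grows as the cutoff approaches fs/2.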
Abstract: For several hundred years, the design of railway tracks
has practically remained unchanged. Traditionally, rail tracks are
placed on a ballast layer due to several reasons, including economy,
rapid drainage, and high load bearing capacity. The primary function
of ballast is to distribute dynamic track loads to sub-ballast and
subgrade layers, while also providing lateral resistance and allowing
for rapid drainage. Upon repeated trainloads, the ballast becomes
fouled due to ballast degradation and the intrusion of fines which
adversely affects the strength and deformation behaviour of ballast.
This paper presents the use of three-dimensional discrete element
method (DEM) in studying the shear behaviour of the fouled ballast
subjected to direct shear loading. Irregularly shaped particles of
ballast were modelled by grouping many spherical balls together in
appropriate sizes to simulate representative ballast aggregates. Fouled
ballast was modelled by injecting a specified number of miniature
spherical particles into the void spaces. The DEM simulation
highlights that the peak shear stress of the ballast assembly decreases
and the dilation of fouled ballast increases with an increasing level of
fouling. Additionally, the distributions of contact force chains and
particle displacement vectors were captured during the shearing process,
explaining the formation of the shear band and the evolution of the
volumetric change of fouled ballast.
Abstract: In the deep south of Thailand, checkpoints for people
verification are necessary for the security management of risk zones,
such as official buildings in the conflict area. In this paper, we
propose an automatic checkpoint system that verifies persons using
information from ID cards and facial features. Methods for a
person’s information extraction and verification are introduced
based on useful information such as ID number and name, extracted
from official cards, and facial images from videos. The proposed
system shows promising results and has a real impact on the local
society.
Abstract: Introduction: The aim is to update ourselves on and
understand the latest electronic formats available to health care
providers, and how they can be used and developed in line with
standards. The idea is to relate manual (paper) medical record
keeping to the maintenance of patients’ electronic information in
a health care setup, and, further, to adopt the right technology
for the organization in order to improve the quality and quantity
of the health care we provide. Objective: To explain the terms
Electronic Medical Record (EMR), Electronic Health Record (EHR)
and Personal Health Record (PHR), and to select the best option
from among the available electronic sources and software before
implementation; to ensure that the technology can be used by end
users without doubts or difficulties; and to evaluate the uses
and barriers of EMR, EHR and PHR. Aim and Scope: The target is to
enable health care providers such as physicians, nurses and
therapists, as well as medical bill reimbursement staff, insurers
and government, to access patients’ information in an easy and
systematic manner without diluting the confidentiality of that
information. Method: Health information technology can be
implemented with the help of organizations that provide legal
guidelines and support the health care provider. The main
objective is to select correct, embedded and affordable database
management software for generating large-scale data, and, in
parallel, to know the latest software available on the market.
Conclusion: The question lies in implementing the electronic
information system with health care providers and organizations.
Clinicians are the main users of the technology and enable us to
“go paperless”. Technology changes rapidly from day to day, so
the basic idea is to explain how to store data electronically in
a safe and secure way. All three formats exemplify the fact that
an electronic format has its own benefits as well as barriers.
Abstract: This paper proposes a method of learning topics for
broadcasting contents. There are two kinds of texts related to
broadcasting contents. One is the broadcasting script, which is a series of
texts including directions and dialogues. The other is blog posts, which
contain relatively abstracted contents, stories, and diverse
information about the broadcasting contents. Although the two kinds of text
cover similar broadcasting contents, the words in blog posts and
broadcasting scripts differ. When unseen words appear, a method is needed
to reflect them in the existing topics. In this paper, we introduce a semantic
vocabulary expansion method to reflect unseen words. We expand
topics of the broadcasting script by incorporating the words in
blog posts. Each word in the blog posts is added to its most semantically
correlated topics. We use word2vec to compute the semantic correlation
between words in blog posts and the topics of scripts. The vocabularies of
topics are updated and then posterior inference is performed to
rearrange the topics. In experiments, we verified that the proposed
method can discover more salient topics for broadcasting contents.
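The assignment of an unseen blog-post word to its most correlated topic can be sketched with plain cosine similarity over toy vectors; in practice the vectors would come from a trained word2vec model, and the topic names below are illustrative:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def assign_to_topic(word_vec, topics):
    """Assign an unseen word to the topic whose top words are, on
    average, most similar to it in the embedding space.
    topics: {topic_id: [embedding vectors of the topic's top words]}."""
    def avg_sim(vecs):
        return sum(cosine(word_vec, v) for v in vecs) / len(vecs)
    return max(topics, key=lambda t: avg_sim(topics[t]))
```

After such assignments, the topic vocabularies are updated and posterior inference is re-run, as the abstract describes.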
Abstract: This paper outlines the development of an
experimental technique in quantifying supersonic jet flows, in an
attempt to avoid seeding particle problems frequently associated with
particle-image velocimetry (PIV) techniques at high Mach numbers.
Based on optical flow algorithms, the idea behind the technique
involves using high speed cameras to capture Schlieren images of the
supersonic jet shear layers, before they are subjected to an adapted
optical flow algorithm based on the Horn-Schunck method to
determine the associated flow fields. The proposed method is capable
of offering full-field unsteady flow information with potentially
higher accuracy and resolution than existing point-measurements or
PIV techniques. Preliminary study via numerical simulations of a
circular de Laval jet nozzle successfully reveals flow and shock
structures typically associated with supersonic jet flows, which serve
as useful data for subsequent validation of the optical flow based
experimental results. For experimental technique, a Z-type Schlieren
setup is proposed with supersonic jet operated in cold mode,
stagnation pressure of 4 bar and an exit Mach number of 1.5. High-speed
single-frame or double-frame cameras are used to capture successive
Schlieren images. As implementation of optical flow technique to
supersonic flows remains rare, the current focus revolves around
methodology validation through synthetic images. The results of the
validation test offer valuable insight into how the optical flow
algorithm can be further improved in robustness and
accuracy. Despite these challenges, however, this supersonic flow
measurement technique may potentially offer a simpler way to
identify and quantify the fine spatial structures within the shock shear
layer.
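The core of the Horn-Schunck scheme mentioned above is a simple per-pixel iteration; a sketch of the standard update (not the paper's adapted version) is:

```python
def horn_schunck_update(u_avg, v_avg, Ix, Iy, It, alpha=1.0):
    """One Horn-Schunck iteration at a single pixel: pull the flow
    from the neighborhood average (u_avg, v_avg) along the image
    gradient so the brightness-constancy residual Ix*u + Iy*v + It
    shrinks, with smoothness weight alpha."""
    denom = alpha ** 2 + Ix ** 2 + Iy ** 2
    common = (Ix * u_avg + Iy * v_avg + It) / denom
    return u_avg - Ix * common, v_avg - Iy * common
```

Iterating this update over all pixels, with (u_avg, v_avg) recomputed from neighbors each pass, yields the dense flow field the technique aims to recover from successive Schlieren images.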
Abstract: Fractal-based digital image compression is a specific
technique in the field of color image compression. The method is best suited
for irregularly shaped image content such as snow, clouds, flames and tree
leaves, relying on the fact that parts of an image often
resemble other parts of the same image. This technique has
drawn much attention in recent years because of very high
compression ratio that can be achieved. Hybrid scheme incorporating
fractal compression and speedup techniques have achieved high
compression ratio compared to pure fractal compression. Fractal
image compression is a lossy compression method in which the self-similar
nature of an image is used. This technique provides a high
compression ratio, less encoding time and a fast decoding process. In
this paper, fractal compression with quad tree and DCT is proposed
to compress the color image. The proposed hybrid schemes require
four phases to compress the color image. First: the image is
segmented and Discrete Cosine Transform is applied to each block of
the segmented image. Second: the block values are scanned in a
zigzag manner to group the zero coefficients together. Third: the resulting image
is partitioned as fractals by quadtree approach. Fourth: the image is
compressed using Run length encoding technique.
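The zigzag scan in the second phase orders the DCT coefficients from low to high frequency so that zeros cluster at the tail for run-length encoding; a minimal sketch:

```python
def zigzag(block):
    """Zigzag scan of a square DCT coefficient block: traverse
    anti-diagonals (i + j constant), alternating direction, so
    low-frequency coefficients come first and trailing zeros
    group together for run-length encoding."""
    n = len(block)
    order = sorted(
        ((i, j) for i in range(n) for j in range(n)),
        key=lambda p: (p[0] + p[1],
                       p[0] if (p[0] + p[1]) % 2 else -p[0]),
    )
    return [block[i][j] for i, j in order]
```

The output sequence then feeds directly into the run-length encoder of the fourth phase.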
Abstract: The present study aims to explore the effect of
computerization on marketing performance in Snowa Company. In
other words, this study intends to answer the question of whether
there is any relationship between the utilization of
computerization in marketing activities and marketing performance.
The statistical population included 60 marketing managers of Snowa
Company. In order to test the research hypotheses, Pearson
correlation coefficient was employed. The reliability was equal to
96.8%. In this study, computerization was the independent variable
and marketing performance was the dependent variable with
characteristics of market share, improving the competitive position,
and sales volume. The results of testing the hypotheses revealed that
there is a significant relationship between utilization of
computerization and market share, sales volume and improving the
competitive position.
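The test statistic used for the hypotheses, the Pearson correlation coefficient, can be sketched as:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length
    samples: the covariance divided by the product of the
    standard deviations (via sums of squared deviations)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Values near +1 or -1 indicate a strong linear relationship between, for example, computerization scores and market share.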
Abstract: The purpose of the research described in this work is
to answer how to measure the rheologic (viscoelastic), i.e.
tendo-deformational, characteristics of soft tissue. The method would
also resemble muscle palpation examination as it is known in clinical
practice. For this purpose, an instrument with the working name
“myotonometer” has been used. At present, there is a lack of objective
methods for assessing muscle tone through the viscous and elastic
properties of soft tissue. That is why we decided to focus on creating or
finding a quantitative and qualitative methodology capable of specifying
muscle tone.
Abstract: A 15-storey RC building, studied in this paper, is
representative of a modern building type constructed in Madina City,
Saudi Arabia, about 10 years ago. These buildings mostly
consist of a reinforced concrete skeleton, i.e. columns, beams and
flat slabs, as well as shear walls in the stair and elevator areas,
arranged so as to provide a resistance system for lateral loads
(wind and earthquake loads). In this study, the dynamic properties of
the 15-storey RC building were identified using ambient motions
recorded at several, spatially-distributed locations within each
building. Three dimensional pushover analysis (Nonlinear static
analysis) was carried out using SAP2000 software incorporating
inelastic material properties for concrete, infill and steel. The effect
of modeling the building with and without infill walls, on the
performance point as well as capacity and demand spectra due to EQ
design spectrum function in Madina area has been investigated. ATC-
40 capacity and demand spectra are utilized to get the modification
factor (R) for the studied building. The purpose of this analysis is to
evaluate the expected performance of structural systems by
estimating strength and deformation demands in design, and
comparing these demands to available capacities at the performance
levels of interest. The results are summarized and discussed.
Abstract: E-government has been adopted and used by many governments/countries around the world, including Ghana, to provide citizens and businesses with more accurate, real-time, and high-quality services and information. The objective of this paper is to present an overview of the Government of Ghana’s (GoG) adoption and implementation of e-government and its usage by the Ministries, Departments and Agencies (MDAs) as well as other public sector institutions to deliver efficient public service to the general public, i.e. citizens, businesses, etc. The Government’s implementation of e-government focused on facilitating effective delivery of government services to the public and ultimately on providing efficient government-wide electronic means of sharing information and knowledge through a network infrastructure developed to connect all major towns and cities, Ministries, Departments and Agencies, and other public sector organizations in Ghana. One aim of the Government of Ghana’s use of ICT in public administration is to improve productivity in government administration and services by facilitating the exchange of information to enable better interaction and coordination of work among MDAs, citizens and private businesses. The study was prepared using secondary sources of data from government policy documents, national and international published reports, journal articles, and web sources. This study indicates that, through the e-government initiative, citizens and businesses can currently access and pay for services such as renewal of driving licenses, business registration, payment of taxes, and acquisition of marriage and birth certificates, as well as apply for passports through the GoG electronic service (eservice) and electronic payment (epay) portals. Further, this study shows that there is enormous commitment from the GoG to adopt and implement e-government as a tool not only to transform the business of government but also to bring efficiency to the public services delivered by the MDAs.
To ascertain this, further study needs to be carried out to determine whether the use of e-government has brought about the anticipated improvements and efficiency in the service delivery of MDAs and other state institutions in Ghana.
Abstract: This paper explored the challenges faced by the
management of a Ghanaian state enterprise in managing conflicts and
disturbances associated with its attempt to implement new work
practices to enhance its capability to operate as a commercial entity.
The purpose was to understand the extent to which organizational
involvement, consistency and adaptability influence employees’
consumption of new work practices in transforming the
organization’s organizational activity system. Using self-administered
questionnaires, data were collected from one hundred
and eighty (180) employees and analyzed using both descriptive and
inferential statistics. The results showed that constraints in
organizational involvement and adaptability prevented the positive
consumption of new work practices by employees in the
organization. It was also found that the organization’s employees failed
to consume the new practices being implemented because they
perceived the process as non-involving and, as such, one that did not
encourage the development of employee capability, empowerment,
and teamwork. The study concluded that the failure of the
organization’s management to create opportunities for organizational
learning constrained its ability to get employees to consume the new
work practices, a situation which could have facilitated the
organization’s capability of operating as a commercial entity.
Abstract: In order to retrieve images efficiently from a large
database, a unique method integrating color and texture features
using genetic programming has been proposed. An opponent color
histogram, which provides shadow, shade, and light intensity invariance,
is employed in the proposed framework for extracting color
features. For texture feature extraction, fast discrete curvelet
transform which captures more orientation information at different
scales is incorporated to represent curve-like edges. A current
issue in image retrieval is reducing the semantic gap
between the user’s preference and low-level features. To address this
concern, a genetic algorithm combined with relevance feedback is
embedded to reduce the semantic gap and retrieve the user’s preferred
images. Extensive and comparative experiments have been conducted
to evaluate the proposed framework for content based image retrieval on
two databases, i.e., COIL-100 and Corel-1000. Experimental results
clearly show that the proposed system surpassed other existing
systems in terms of precision and recall. The proposed work achieves
its highest performance with an average precision of 88.2% on COIL-100
and 76.3% on Corel, and an average recall of 69.9% on COIL and 76.3%
on Corel. Thus, the experimental results confirm that the proposed
content based image retrieval system architecture attains a better
solution for image retrieval.
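The opponent color representation used for the color features follows the standard transform (a sketch; the paper's histogram binning is not shown):

```python
from math import sqrt

def opponent_channels(r, g, b):
    """Standard opponent color transform of an RGB pixel: O1 and O2
    are chromatic channels and O3 carries the intensity, so
    histograms built on O1 and O2 are less sensitive to shadow
    and shading than raw RGB histograms."""
    o1 = (r - g) / sqrt(2)
    o2 = (r + g - 2 * b) / sqrt(6)
    o3 = (r + g + b) / sqrt(3)
    return o1, o2, o3
```

A neutral gray pixel maps to zero on both chromatic channels, which is what makes the representation robust to illumination intensity.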
Abstract: The introduction of a multitude of new and interactive
e-commerce information technology (IT) artifacts has impacted
adoption research. Rather than solely functioning as productivity
tools, new IT artifacts assume the roles of interaction mediators and
social actors. This paper describes the varying roles assumed by IT
artifacts, and proposes and distinguishes between four distinct foci of
how the artifacts are evaluated. It further proposes a theoretical
model that maps the different views of IT artifacts to four distinct
types of evaluations.
Abstract: Scripts are one of the basic text resources to understand
broadcasting contents. Topic modeling is the method to get the
summary of the broadcasting contents from its scripts. Generally,
scripts represent contents descriptively with directions and speeches,
and provide scene segments that can be seen as semantic units.
Therefore, a script can be topic modeled by treating a scene segment
as a document. Because scene segments consist mainly of speeches,
however, relatively few co-occurrences among words in the scene
segments are observed. This inevitably causes poor-quality
topics under statistical learning methods. To tackle this problem, we
propose a method to improve topic quality with additional word
co-occurrence information obtained using scene similarities. The
main idea of improving topic quality is that the information that
two or more texts are topically related can be useful to learn high
quality of topics. In addition, more accurate topical representations
in turn yield more accurate information about whether two texts are
related or not. In this paper, we regard two scene segments as related
if their topical similarity is high enough. We also consider that
words co-occur if they appear in topically related scene segments
together. By iteratively inferring topics and determining semantically
neighboring scene segments, we derive a topic space that represents
the broadcasting contents well. In the experiments, we showed that the
proposed method generates higher-quality topics from Korean
drama scripts than the baselines.
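One simple way to decide whether two scene segments are topically related, as the method requires, is to compare their inferred topic distributions; a sketch using the Hellinger distance (the threshold is an assumed tuning parameter, not the paper's):

```python
from math import sqrt

def hellinger(p, q):
    """Hellinger distance between two topic distributions
    (0 = identical, 1 = disjoint support)."""
    return sqrt(sum((sqrt(a) - sqrt(b)) ** 2 for a, b in zip(p, q))) / sqrt(2)

def related(p, q, threshold=0.3):
    """Treat two scene segments as topically related when their
    topic distributions are sufficiently close."""
    return hellinger(p, q) < threshold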
Abstract: Multiple Sclerosis (MS) is a disease which affects the
central nervous system and causes balance problems. In clinical practice,
this disorder is usually evaluated using static posturography. Linear
or nonlinear measures, extracted from the posturographic data (i.e.
center of pressure, COP) recorded during a balance test, have been
used to analyze the postural control of MS patients. In this study,
two nonlinear parameters, the trend (TREND) and the sample entropy
(SampEn), were chosen to investigate their relationships with the
expanded disability status scale (EDSS) score. Forty volunteers with
different EDSS scores participated in our experiments with eyes open
(EO) and closed (EC). TREND and 2 types of SampEn (SampEn1
and SampEn2) were calculated for each combined COP position
signal. The results have shown that TREND had a weak negative
correlation to EDSS while SampEn2 had a strong positive correlation
to EDSS. Compared to TREND and SampEn1, SampEn2 showed a
better significant correlation to EDSS and an ability to discriminate
the MS patients in the EC case. In addition, the outcome of the study
suggests that multi-dimensional nonlinear analysis could provide
some information about the impact of disability progression in MS on
the dynamics of the COP data.
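The sample entropy parameter can be sketched in its standard form (the paper's two SampEn variants and parameter choices are not specified here; m and r are illustrative defaults):

```python
from math import log

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D signal: -ln(A/B), where B counts
    pairs of length-m subsequences within tolerance r (Chebyshev
    distance) and A counts the same for length m+1, with
    self-matches excluded. In practice r is usually scaled by the
    signal's standard deviation; more regular signals give lower
    values."""
    def count(mm):
        t = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        return sum(
            1
            for i in range(len(t))
            for j in range(i + 1, len(t))
            if max(abs(a - b) for a, b in zip(t[i], t[j])) <= r
        )
    b, a = count(m), count(m + 1)
    return -log(a / b) if a > 0 and b > 0 else float("inf")
```

Applied to a COP signal, a higher value indicates less predictable sway dynamics, which is the quantity correlated with EDSS in the study.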
Abstract: In this research, we propose to conduct a diagnostic and
predictive analysis of the key factors and consequences of urban
population relocation. To achieve this goal, urban simulation models
extract the urban development trends as land use change patterns from
a variety of data sources. The results are treated as part of urban big
data with other information such as population change and economic
conditions. Multiple data mining methods are deployed on this data to
analyze nonlinear relationships between parameters. The result
determines the driving force of population relocation with respect to
urban sprawl and urban sustainability and their related parameters.
This work sets the stage for developing a comprehensive urban
simulation model catering to specific questions from targeted users. It
contributes towards achieving sustainability as a whole.