Abstract: A composite material based on Fe3Si micro-particles
and Mn-Zn nano-ferrite was prepared using powder metallurgy
technology. A sol-gel process followed by auto-combustion was
used to synthesize the Mn0.8Zn0.2Fe2O4 ferrite, and 3 wt.% of the
mechanically milled ferrite was mixed with the Fe3Si powder alloy.
The mixed micro-nano powder system was homogenized by resonant
acoustic mixing using a Resodyn LabRAM mixer. This non-invasive
homogenization technique was used to preserve the spherical
morphology of the Fe3Si powder particles. Uniaxial cold pressing in
a closed die at a pressure of 600 MPa was applied to obtain a compact
sample. Microwave sintering of the green compact was carried out at
800°C for 20 minutes in air. The density of the powders and of the
composite was measured by He pycnometry. The impulse excitation
method was used to measure the elastic properties of the sintered
composite. Mechanical properties were evaluated by measuring the
transverse rupture strength (TRS) and Vickers hardness (HV).
Resistivity was measured by the four-point probe method. The
distribution of the ferrite phase in the volume of the composite was
documented by metallographic analysis.
It was found that the nano-ferrite particles distributed among the
micro-particles of the Fe3Si powder alloy led to a high relative
density (~93%) and suitable mechanical properties (TRS > 100 MPa,
HV ~1 GPa, E-modulus ~140 GPa) of the composite. The high
electrical resistivity (~6.7 Ω·cm) of the prepared composite indicates
its potential application as a soft magnetic material at medium and
high frequencies.
Abstract: The web’s increased popularity has brought a huge
amount of information, due to which automated web page
classification systems are essential for improving search engines’
performance. Web pages have many features, such as HTML or XML
tags, hyperlinks, URLs, and text content, which can be considered
during an automated classification process. It is known that web page
classification is enhanced by hyperlinks, as they reflect the linkages
between web pages. The aim of this study is to reduce the number of
features used while improving the accuracy of web page
classification. In this paper, a novel feature selection method using an
improved Particle Swarm Optimization (PSO) incorporating
principles of evolution is proposed. The extracted features were
tested on the WebKB dataset using a parallel neural network to
reduce the computational cost.
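As a rough illustration of the underlying technique, the following minimal Python sketch implements standard binary PSO for feature selection; the paper's improved evolutionary variant, its fitness function, and its WebKB/neural-network setup are not reproduced here, and the toy correlation-based fitness below is purely an assumption.

    # Minimal sketch of generic binary PSO feature selection (not the
    # paper's improved variant). Fitness is a stand-in: it rewards a
    # crude relevance score while penalizing the number of features.
    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(mask, X, y):
        selected = np.flatnonzero(mask)
        if selected.size == 0:
            return -np.inf
        # Stand-in for classifier accuracy: mean |corr(feature, label)|.
        score = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in selected])
        return score - 0.01 * selected.size

    def binary_pso(X, y, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5):
        n_feat = X.shape[1]
        pos = (rng.random((n_particles, n_feat)) < 0.5).astype(float)  # bit masks
        vel = rng.uniform(-1.0, 1.0, (n_particles, n_feat))
        pbest, pbest_fit = pos.copy(), np.array([fitness(p, X, y) for p in pos])
        gbest = pbest[pbest_fit.argmax()].copy()
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, n_feat))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            # Sigmoid transfer: velocity becomes the probability of a 1-bit.
            pos = (rng.random((n_particles, n_feat)) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
            fit = np.array([fitness(p, X, y) for p in pos])
            better = fit > pbest_fit
            pbest[better], pbest_fit[better] = pos[better], fit[better]
            gbest = pbest[pbest_fit.argmax()].copy()
        return gbest

    # Usage on synthetic data: 100 samples, 30 features, 3 informative.
    X = rng.random((100, 30))
    y = (X[:, 0] + X[:, 1] + X[:, 2] > 1.5).astype(float)
    print("selected features:", np.flatnonzero(binary_pso(X, y)))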
Abstract: Qatar, a Gulf country highly dependent on its oil and
gas revenues, is looking to innovate, diversify, and ultimately reach
its aim of creating a knowledge economy in preparation for its
post-oil era. One area in which the country is investing is
contemporary art, and world-renowned artists such as Damien Hirst
and Richard Serra have been commissioned to design site-specific art
for the public spaces of the city of Doha as well as for more remote
desert locations. This research discusses the changing presence, role,
and context of public art in Doha, offering a historical and cultural
overview and examining the different forms and media as well as the
typologies of urban and public spaces in which the art is installed. It
examines the process of implementing site-specific artworks, looking
at questions of scale, history, social meaning, and formal aesthetics.
The methodology combines theoretical research on the understanding
of public art and its role and placement in public space with
empirical research on contemporary public art projects in Doha,
based on documentation, interviews, and site and context analysis of
the urban or architectural spaces within which the art is situated.
Surveys and interviews, conducted via social media across different
segments of contemporary Qatari society, including all nationalities
and social groups, are used to measure and qualify the impacts and
effects of the art on the population.
Abstract: The aim of this paper is to perform experimental
modal analysis (EMA) of reinforced concrete (RC) square slabs.
EMA is the process of determining the modal parameters (natural
frequencies, damping factors, and modal vectors) of a structure from
a set of frequency response functions (FRFs) by curve fitting.
Although experimental modal analysis (or modal testing) has grown
steadily in popularity since the advent of the digital FFT spectrum
analyzer in the early 1970s, its application to all types of members
and materials has not yet been well documented. Therefore, in this
work, experimental tests were conducted on RC square slab
specimens of dimensions 600 mm x 600 mm x 40 mm. The
experimental analysis was based on a freely supported boundary
condition. Moreover, impact testing, a fast and economical means of
finding the modes of vibration of a structure, was used during the
experiments. In addition, a PicoScope 6 device and MATLAB
software were used to acquire the data and to analyze and plot the
FRFs. The experimental natural frequencies extracted from the
measurements exhibit good agreement with analytical predictions. It
is shown that the EMA method can be usefully employed to
investigate the dynamic behavior of RC slabs.
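As a rough illustration of how natural frequencies are extracted from impact-test FRFs, here is a minimal Python sketch (the experiments themselves used a PicoScope and MATLAB); the sampling rate, the two-mode synthetic signals, and the peak threshold are assumptions, not the measured slab data.

    # Minimal sketch: H1 FRF estimate (cross-spectrum over input
    # auto-spectrum) from synthetic hammer/accelerometer signals,
    # with natural frequencies picked as peaks of |H(f)|.
    import numpy as np
    from scipy.signal import csd, welch, find_peaks

    fs = 2000.0                       # sampling rate, Hz (assumed)
    t = np.arange(0, 4.0, 1 / fs)

    force = np.zeros_like(t)          # impulsive hammer input
    force[0] = 1.0
    response = (np.exp(-3 * t) * np.sin(2 * np.pi * 120 * t)      # mode 1
                + 0.5 * np.exp(-5 * t) * np.sin(2 * np.pi * 310 * t)  # mode 2
                + 0.01 * np.random.default_rng(0).standard_normal(t.size))

    # H1 estimator: H(f) = S_fx(f) / S_ff(f)
    f, S_fx = csd(force, response, fs=fs, nperseg=2048)
    _, S_ff = welch(force, fs=fs, nperseg=2048)
    H = S_fx / S_ff

    # Natural frequencies appear as peaks in the FRF magnitude.
    peaks, _ = find_peaks(np.abs(H), height=0.1 * np.abs(H).max())
    print("estimated natural frequencies (Hz):", f[peaks])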
Abstract: The legends about “user-friendly” and “easy-to-use”
birotical tools (computer-related office tools) have been spreading
and misleading end-users. This has led to an extremely high number
of incorrect documents, causing serious financial losses in the
creation, modification, and retrieval processes. Our research proved
that there are at least two sources of this underachievement:
(1) The lack of a definition of correctly edited and formatted
documents. Consequently, end-users do not know whether their
methods and results are correct or not. They are not aware of their
ignorance: their lack of knowledge prevents them from even
realizing it. (2) The end-users’ problem-solving methods. We have
found that in non-traditional programming environments end-users
apply, almost exclusively, surface-approach metacognitive methods
to carry out their computer-related activities, which have proved less
effective than deep-approach methods.
Based on these findings, we have developed deep-approach
methods that are based on and adapted from traditional
programming languages. In this study, we focus on the most popular
type of birotical document, the text-based document. We have
provided a definition of correctly edited text and, based on this
definition, adapted the debugging method known from programming.
According to this method, before actual text editing takes place, a
thorough debugging of already existing texts and a categorization of
their errors are carried out. In this way, in advance of real text
editing, users learn the requirements of text-based documents and of
correctly formatted text.
The method has proved much more effective than the previously
applied surface-approach methods. Its advantages are that proper
text handling requires far fewer human and computer resources than
clicking aimlessly in the GUI (Graphical User Interface), and that
data retrieval is much more effective than from error-prone
documents.
Abstract: The present paper summarizes the analysis of requests
for consultation of the information and data on industrial emissions
made publicly available on the web site of the Ministry of
Environment, Land and Sea concerning integrated pollution
prevention and control for large industrial installations, the so-called
“AIA Portal”.
As a matter of fact, a huge amount of information on national
industrial plants is already available on the internet, although it is
usually provided as textual documentation or images.
Thus, it is not possible to access all the relevant information
through interoperable systems, to retrieve relevant information for
decision-making purposes, or to raise awareness of environmental
issues.
Moreover, since in Italy the number of institutional and private
subjects involved in the management of public information on
industrial emissions is substantial, access to the information is
provided on internet web sites according to different criteria; thus, at
present it is not structurally homogeneous or comparable.
To overcome these difficulties, in the case of the Coordinating
Committee for the implementation of the Agreement for the
industrial area in Taranto and Statte, which operated before the IPPC
permit-granting procedures for the relevant installations located in
the area, a major effort was devoted to elaborating and validating the
data and information on the characterization of soil, groundwater
aquifers, and coastal sea available to the different subjects, in order
to derive a global perspective for decision-making purposes. Thus,
the present paper also focuses on the main outcomes of this
experience.
Abstract: In this paper we describe one critical research
program within a complex, ongoing multi-year project (2010 to 2014
inclusive) whose overall goal is to improve the learning outcomes of
first-year undergraduate commerce/business students in an
Information Systems (IS) subject with very large enrolment. The
single research program described in this paper is the analysis of
student attitudes and decision making in relation to the availability of
formative assessment feedback via Web-based real-time conferencing
and document exchange software (Adobe Connect). The formative
assessment feedback between teaching staff and students is in respect
of an authentic problem-based, team-completed assignment. The
analysis of student attitudes and decision making is investigated via
both qualitative (firstly) and quantitative (secondly) application of the
Theory of Planned Behavior (TPB) with two separate, statistically
significant trial samples of the enrolled students. The initial
qualitative TPB investigation revealed that perceived self-efficacy,
improved time-management, and lecturer-student relationship
building were the major factors in shaping an overall favorable
student attitude to online feedback, whilst some students expressed
valid concerns with perceived control limitations identified within the
online feedback protocols. The subsequent quantitative TPB
investigation then confirmed that attitude towards usage, subjective
norms surrounding usage, and perceived behavioral control of usage
were all significant in shaping student intention to use the online
feedback protocol, with these three variables explaining 63 percent of
the variance in the behavioral intention to use the online feedback
protocol. The identification in this research of perceived behavioral
control as a significant determinant in student usage of a specific
technology component within a virtual learning environment (VLE)
suggests that VLEs could now be viewed not as a single, atomic
entity, but as a spectrum of technology offerings ranging from the
mature and simple (e.g., email, Web downloads) to the cutting-edge
and challenging (e.g., Web conferencing and real-time document
exchange). That is, not all VLEs should be considered the same.
The results of this research suggest that tertiary students have the
technological sophistication to assess a VLE in this more selective
manner.
Abstract: Verification and Validation of a simulated process
model is the most important phase of the simulator life cycle.
Evaluation of simulated process models based on Verification and
Validation techniques checks the closeness of each component model
(in a simulated network) to the real system/process with respect to
dynamic behaviour under steady-state and transient conditions. The
process of Verification and Validation helps in qualifying the process
simulator for its intended purpose, whether that is providing
comprehensive training or design verification. In general, model
verification is carried out by comparison of simulated component
characteristics with the original requirement to ensure that each step
in the model development process completely incorporates all the
design requirements. Validation testing is performed by comparing
the simulated process parameters to the actual plant process
parameters either in standalone mode or integrated mode.
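As an illustration of what such a comparison might look like, the following minimal Python sketch checks a simulated transient against reference plant data using a relative RMS tolerance; the traces, the tolerance value, and the acceptance criterion are assumptions, not the PFBR evaluation procedure.

    # Minimal sketch of one validation check: a tolerance test between
    # a simulated transient and recorded plant data (stand-in values).
    import numpy as np

    def validate_parameter(sim, plant, rel_tol=0.02):
        """Return (pass/fail, RMS error) for a simulated trace vs plant data."""
        sim, plant = np.asarray(sim, float), np.asarray(plant, float)
        rms_error = np.sqrt(np.mean((sim - plant) ** 2))
        scale = np.max(np.abs(plant))
        return rms_error / scale <= rel_tol, rms_error

    # Example: outlet temperature during a hypothetical transient.
    t = np.linspace(0, 100, 501)
    plant = 820 - 30 * (1 - np.exp(-t / 25))       # recorded plant trace (stand-in)
    sim = plant + np.random.default_rng(1).normal(0, 1.0, t.size)  # model output
    ok, err = validate_parameter(sim, plant)
    print(f"within tolerance: {ok}, RMS error = {err:.2f}")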
A full-scope replica operator training simulator named
KALBR-SIM (Kalpakkam Breeder Reactor Simulator) has been
developed for the Prototype Fast Breeder Reactor (PFBR) at IGCAR,
Kalpakkam, India, wherein the main participants are engineers and
experts from the Modeling, Process Design, and Instrumentation &
Control design teams. This paper discusses the Verification and
Validation process in general, the evaluation procedure adopted for
the PFBR operator training simulator, the methodology followed for
verifying the models, and the reference documents and standards
used. It details the importance of internal validation by design
experts, subsequent validation by an external agency consisting of
experts from various fields, model improvement by tuning based on
the experts’ comments, final qualification of the simulator for its
intended purpose, and the difficulties faced while coordinating the
various activities.
Abstract: Fast-changing knowledge systems on the Internet can
be accessed more efficiently with the help of automatic document
summarization and updating techniques. The aim of multi-document
update summary generation is to construct a summary reflecting the
main stream of information in a collection of documents, under the
hypothesis that the user has already read a set of previous documents.
In order to draw more semantic information from the documents,
deeper linguistic or semantic analysis of the source documents is
used instead of relying only on word frequencies to select important
concepts. In order to produce a responsive summary, meaning-
oriented structural analysis is needed. To address this issue, the
proposed system presents a document summarization approach
based on sentence annotation with aspects, prepositions, and named
entities. A semantic element extraction strategy is used to select
important concepts from the documents, which are then used to
generate an enhanced semantic summary.
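As a rough illustration of sentence-annotation-based extractive summarization, the Python sketch below scores sentences by a crude named-entity proxy (capitalized mid-sentence tokens) and by novelty with respect to already-read documents; the real system's aspect and preposition annotations and its semantic extraction strategy are not reproduced.

    # Minimal sketch of update summarization: rank sentences by
    # semantic elements (crude NER proxy) plus novelty vs. read docs.
    import re

    def summarize(documents, already_read, n_sentences=3):
        seen = set(w.lower() for doc in already_read
                   for w in re.findall(r"\w+", doc))
        sentences = [s.strip() for doc in documents
                     for s in re.split(r"(?<=[.!?])\s+", doc) if s.strip()]

        def score(sentence):
            words = re.findall(r"\w+", sentence)
            entities = sum(1 for w in words[1:] if w[0].isupper())   # NER proxy
            novelty = sum(1 for w in words if w.lower() not in seen)  # update focus
            return entities + novelty / max(len(words), 1)

        return sorted(sentences, key=score, reverse=True)[:n_sentences]

    docs = ["NASA launched Artemis from Florida. The weather was fine.",
            "The Artemis mission reached lunar orbit after five days."]
    print(summarize(docs, already_read=["NASA planned a launch."], n_sentences=2))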
Abstract: Image or document encryption is needed in
e-government databases. In this paper we consider two image
matrices: one is public, and the second is the secret (original) one.
Each matrix is analyzed using the singular value decomposition
(SVD) transform, so that each matrix is decomposed into three
matrices: a row orthogonal basis, a column orthogonal basis, and a
spectral diagonal matrix. The product of the two row bases is
calculated, and similarly the product of the two column bases.
Finally, the public image, the row product, and the column product
are saved as files. In the decryption stage, the original image is
deduced by a mutual method from the three public files.
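The algebra behind the scheme can be sketched as follows in Python; the exact "mutual method" of the paper is not specified in the abstract, so the reconstruction step below, including the assumption that the secret image's singular values are conveyed alongside the stored files, is an illustrative interpretation rather than the paper's procedure.

    # Minimal sketch: both images are decomposed as A = U S V^T, the
    # cross-products of the orthogonal bases are stored, and the secret
    # image is rebuilt from them.
    import numpy as np

    rng = np.random.default_rng(0)
    public = rng.random((8, 8))    # stand-in public image
    secret = rng.random((8, 8))    # stand-in secret image

    Up, sp, Vpt = np.linalg.svd(public)
    Us, ss, Vst = np.linalg.svd(secret)

    # Stored files: the public image plus the two basis products.
    row_product = Up.T @ Us        # product of the two "row" bases
    col_product = Vpt @ Vst.T      # product of the two "column" bases

    # Decryption: recover the secret bases from the public image's SVD
    # (assumes the singular values ss are also conveyed, e.g. embedded
    # in one of the stored files).
    Us_rec = Up @ row_product              # Up Up^T Us = Us
    Vst_rec = (Vpt.T @ col_product).T      # (Vp Vp^T Vs)^T = Vs^T
    secret_rec = Us_rec @ np.diag(ss) @ Vst_rec
    print("max reconstruction error:", np.abs(secret_rec - secret).max())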
Abstract: Member States shall establish zones and
agglomerations throughout their territory to assess and manage air
quality in order to comply with European directives.
In Italy, Decree 155/2010, transposing Directive 2008/50/EC on
ambient air quality and cleaner air for Europe, merged into a single
act the previous provisions on ambient air quality assessment and
management, including those resulting from the implementation of
Directive 2004/107/EC relating to arsenic, cadmium, nickel, mercury,
and polycyclic aromatic hydrocarbons in ambient air.
Decree 155/2010 introduced stricter rules for identifying zones on
the basis of the characteristics of the territory instead of pollution
levels, as was done in the past. The implementation of these new
criteria has reduced the great variability of the previous zoning,
leading to a significant reduction in the total number of zones and to
a complete and uniform ambient air quality assessment and
management throughout the country.
The present paper concerns the definition of the new zones in
Italy according to Decree 155/2010. In particular, it contains the
description and the analysis of the outcome of the zoning and
classification process.
Abstract: Validity, integrity, and impacts of the IT systems of
the US federal courts have been studied as part of the Human Rights
Alert-NGO (HRA) submission for the 2015 Universal Periodic
Review (UPR) of human rights in the United States by the Human
Rights Council (HRC) of the United Nations (UN). The current
report includes an overview of IT system analysis, data mining, and
case studies. System analysis and data mining show: development
and implementation with no lawful authority; servers of unverified
identity; invalid implementation of electronic signatures,
authentication instruments and procedures, and authorities and
permissions; discrimination in access against the public and
unrepresented (pro se) parties and in favor of attorneys; and
widespread publication of invalid judicial records and dockets,
leading to their false representation and false enforcement. A series
of case studies documents the impacts on individuals’ human rights,
on banking regulation, and on international matters. The significance
of these findings is discussed in the context of various media and
expert reports, which opine unprecedented corruption of the US
justice system today and question whether the US Constitution has in
fact been suspended. Similar findings were previously reported for
the IT systems of the State of California and the State of Israel, and
were incorporated, subject to professional HRC staff review, into the
UN UPR reports (2010 and 2013). Solutions are proposed, based on
the principles of publicity of the law and the separation of powers:
reliance on US IT and legal experts accountable to the legislative
branch, enhanced transparency, and ongoing vigilance by human
rights and internet activists. IT experts should assume more
prominent civic duties in safeguarding civil society in our era.
Abstract: Over the last few decades, oilfield service rolling
equipment has significantly increased in weight, primarily because of
emissions regulations, which require larger/heavier engines, larger
cooling systems, and, in some cases, emissions after-treatment
systems. Larger engines cause more vibration and shock loads,
leading to failure of electronics and control systems.
If the vibration frequency of the engine matches the natural
frequency of the system, strong resonance is observed in structural
parts and mounts. One such existing automated control equipment
system, comprising wire rope mounts used for mounting computers,
was designed approximately 12 years ago. It includes an industrial-
grade computer to control the system operation. The original
computer had a smaller, lighter enclosure. After a few years, a newer
computer version was introduced, which was 10 lbm heavier. Some
failures of internal computer parts have been documented for cases in
which the old mounts were used. Because of the added weight, there
is a possibility of the two brackets impacting each other under
off-road conditions, which causes a high shock input to the computer
parts. This added failure mode requires validating the existing mount
design for the new, heavier computer.
This paper discusses the modal finite element method (FEM)
analysis and experimental modal analysis conducted to study the
effects of vibration on the wire rope mounts and the computer. The
existing mount was modeled in ANSYS software, and the resulting
mode shapes and frequencies were obtained. The experimental modal
analysis was conducted, and the actual frequency responses were
observed and recorded.
The results clearly revealed that at the resonance frequency the
brackets were colliding, potentially causing damage to computer
parts. To solve this issue, spring mounts of different stiffnesses were
modeled in ANSYS software, and the resonant frequency was
determined. Increasing the stiffness of the system shifted the resonant
frequency away from the frequency window in which the engine
showed heavy vibration or resonance. After multiple iterations in
ANSYS software, the stiffness of the spring mount was finalized and
then experimentally validated.
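The underlying trade-off can be illustrated with a single-degree-of-freedom model, where the natural frequency is f_n = sqrt(k/m)/(2*pi), so a stiffer mount raises the resonant frequency; the mass and stiffness values in this short Python sketch are illustrative assumptions, not the paper's data.

    # Minimal sketch of the stiffness/frequency trade-off behind the
    # mount redesign, using a 1-DOF spring-mass model.
    import math

    computer_mass = 25.0  # kg, assumed mass of the heavier computer + enclosure

    for k in (5e4, 1e5, 2e5, 4e5):  # candidate mount stiffnesses, N/m
        f_n = math.sqrt(k / computer_mass) / (2 * math.pi)
        print(f"k = {k:8.0f} N/m  ->  f_n = {f_n:5.1f} Hz")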
Abstract: The globalization of markets and the need to develop
competitive advantages and core competencies, among other things,
lead organizations to increasingly cross borders to operate in other
countries. The expatriation of professionals who go to work in a
country other than their own is becoming increasingly common. In
order to generate data on this issue, research was conducted on
expatriate employees’ perceptions of expatriation success. The
research method used was a case study with a qualitative approach.
The research was carried out through interviews with five expatriates
in India and five expatriates in China, interviews with expatriate
department heads, and analysis of company documents. It was found
that there are differences between the organizational perception and
the expatriates’ perception of what constitutes mission success. The
paper also provides suggestions for further research and suggestions
for future expatriates.
Abstract: In this paper, we present a robust algorithm to
recognize text extracted from grocery product images captured by
mobile phone cameras. Recognition of such text is challenging, since
text in grocery product images varies in size, orientation, style, and
illumination, and can suffer from perspective distortion.
Pre-processing is performed to make the characters scale- and
rotation-invariant. Since the text degradations cannot be
appropriately modeled using well-known geometric transformations
such as translation, rotation, affine transformation, and shearing, we
use the whole set of the character’s black pixels as our feature vector.
Classification is performed with a minimum distance classifier
using the maximum likelihood criterion, which delivers a very
promising Character Recognition Rate (CRR) of 89%. We achieve a
considerably higher Word Recognition Rate (WRR) of 99% when
using lower-level linguistic knowledge about product words during
the recognition process.
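As a rough illustration of the classification step, the Python sketch below implements a minimum distance classifier over raw binarized character pixels; under equal-covariance Gaussian assumptions, the minimum Euclidean distance decision coincides with the maximum likelihood criterion. The templates, image sizes, and crude resize are stand-ins, not the paper's pre-processing.

    # Minimal sketch of a minimum-distance classifier whose feature
    # vector is the whole set of binarized character pixels.
    import numpy as np

    def preprocess(img, size=(16, 16)):
        """Binarize and crudely resize for scale invariance."""
        img = np.asarray(img, float)
        # nearest-neighbour resize (a real system also normalizes rotation)
        ys = np.linspace(0, img.shape[0] - 1, size[0]).astype(int)
        xs = np.linspace(0, img.shape[1] - 1, size[1]).astype(int)
        return (img[np.ix_(ys, xs)] > img.mean()).astype(float).ravel()

    def classify(img, templates):
        """Assign the label whose template is nearest in Euclidean distance."""
        x = preprocess(img)
        dists = {label: np.linalg.norm(x - t) for label, t in templates.items()}
        return min(dists, key=dists.get)

    # Usage with toy templates: 'I' is a vertical bar, '-' a horizontal bar.
    bar_v = np.zeros((16, 16)); bar_v[:, 7:9] = 1
    bar_h = np.zeros((16, 16)); bar_h[7:9, :] = 1
    templates = {"I": preprocess(bar_v), "-": preprocess(bar_h)}
    print(classify(bar_v + 0.1, templates))  # -> "I"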
Abstract: The use of eXtensible Markup Language (XML) in
web, business, and scientific databases has led to the development of
methods, techniques, and systems to manage and analyze XML data.
Semi-structured documents suffer from heterogeneity and high
dimensionality. XML structure and content mining represent a
convergence of research in semi-structured data and text mining. As
the information available on the internet grows drastically, extracting
knowledge from XML documents becomes a harder task. Indeed,
documents are often so large that the data set returned as the answer
to a query may be too big to convey the required information. To
improve query answering, a Semantic Tree Based Association Rule
(STAR) mining method is proposed. This method provides
intensional information by considering the structure, the content, and
the semantics of the content. The method is applied to the Reuters
dataset, and the results show that the proposed method performs well.
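As a rough illustration of the rule-mining step that STAR builds on, the Python sketch below extracts support/confidence association rules from toy itemized XML fragments; the semantic tree construction and the semantics-aware selection of the actual method are not reproduced.

    # Minimal sketch of association rule mining (support/confidence)
    # over items extracted from XML document fragments (stand-ins).
    from itertools import combinations

    transactions = [
        {"acquisition", "usa", "bank"},
        {"acquisition", "usa"},
        {"earnings", "usa", "bank"},
        {"acquisition", "bank"},
    ]

    def support(itemset):
        return sum(itemset <= t for t in transactions) / len(transactions)

    min_support, min_conf = 0.5, 0.6
    items = sorted(set().union(*transactions))
    for a, b in combinations(items, 2):
        pair = {a, b}
        if support(pair) >= min_support:
            conf = support(pair) / support({a})
            if conf >= min_conf:
                print(f"{a} -> {b}  (support={support(pair):.2f}, conf={conf:.2f})")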
Abstract: Knowledge management (KM) is considered an
important factor in improving health care services. KM facilitates the
transfer of existing knowledge and the development of new
knowledge in hospitals. This paper reviews the practices adopted by
doctors in Kuwait for capturing, sharing, and generating knowledge.
It also discusses the perceived impact of KM practices on the
performance of hospitals. Based on a survey of 277 doctors, the study
found that KM practices among doctors in the sampled hospitals
were not very effective. Little attention was paid to the main
activities that support the transfer of expertise among doctors in
hospitals. However, as predicted by previous studies, good KM
practices were perceived by doctors to have a positive impact on the
performance of hospitals. It was concluded that through effective KM
practices hospitals could improve the services they provide.
Documentation of best practices and the capturing of lessons learnt
for the re-use of knowledge could help transform hospitals into
learning organizations.
Abstract: The growth over the centuries in the volume of text
data, such as books and articles in libraries, has imposed the need for
effective mechanisms to locate them. Early techniques such as
abstracting, indexing, and the use of classification categories marked
the birth of a new field of research called "Information Retrieval".
Information Retrieval (IR) can be defined as the task of designing
models and systems whose purpose is to facilitate access to a set of
documents in electronic form (a corpus), allowing users to find the
documents relevant to them, that is to say, the content that matches
their information needs.
Most information retrieval models use a specific data structure to
index a corpus, called an "inverted file" or "inverted index".
This inverted file collects information on all the terms appearing
in the corpus documents, specifying the identifiers of the documents
that contain each term, the frequency of each term in the documents
of the corpus, the positions of the occurrences of the word, and so on.
In this paper we use an object-oriented database (db4o) instead of
the inverted file; that is to say, instead of searching for a term in the
inverted file, we search for it in the db4o database.
The purpose of this work is to carry out a comparative study to
see whether object-oriented databases can compete with the inverted
index in terms of access speed and resource consumption on a large
volume of data.
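For concreteness, here is a minimal Python sketch of the inverted index just described, recording for each term the documents containing it, its per-document frequency, and its token positions; the paper's actual implementation and its db4o counterpart are not reproduced.

    # Minimal sketch of an inverted index: term -> {doc_id: [positions]}.
    from collections import defaultdict

    def build_inverted_index(docs):
        """docs: {doc_id: text}. Frequency is the length of each posting list."""
        index = defaultdict(lambda: defaultdict(list))
        for doc_id, text in docs.items():
            for pos, term in enumerate(text.lower().split()):
                index[term][doc_id].append(pos)
        return index

    docs = {1: "information retrieval with inverted files",
            2: "object databases versus inverted index structures"}
    index = build_inverted_index(docs)
    for doc_id, positions in index["inverted"].items():
        print(f"doc {doc_id}: tf={len(positions)}, positions={positions}")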
Abstract: Ontologies offer a means for representing and sharing
information in many domains, particularly in complex ones. For
example, they can be used for representing and sharing the
information of the System Requirement Specification (SRS) of
complex systems, such as the SRS of ERTMS/ETCS, which is
written in natural language. Since this is a real-time and critical
system, generic ontology languages, such as OWL, and generic
ERTMS ontologies provide minimal support for modeling the
temporal information omnipresent in these SRS documents. To
support the modeling of temporal information, one of the challenges
is to enable the representation of dynamic features evolving in time
within a generic ontology with minimal redesign of it. The separation
of temporal information from other information can help to predict
system runtime operation and to properly design and implement it.
In addition, it is helpful to provide reasoning and querying
techniques for the temporal information represented in the ontology,
in order to detect potential temporal inconsistencies. To address this
challenge, we propose a lightweight three-layer temporal Quality of
Service (QoS) ontology for representing, reasoning, and querying
over temporal and non-temporal information in a complex domain
ontology. Representing QoS entities in separate layers clarifies the
distinction between non-QoS entities and QoS entities in an
ontology. The upper generic layer of the proposed ontology provides
an intuitive knowledge of domain components, especially
ERTMS/ETCS components. The separation of the intermediate QoS
layer from the lower QoS layer allows us to focus on specific QoS
characteristics, such as temporal or integrity characteristics. In this
paper, we focus on temporal information that can be used to predict
system runtime operation. To evaluate our approach, an example of
the proposed domain ontology for the handover operation, as well as
a reasoning rule over temporal relations in this domain-specific
ontology, is presented.
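As a rough illustration of reasoning over temporal relations, the Python sketch below checks an ordering constraint between two intervals using two of Allen's interval relations; the handover phases, their times, and the rule itself are illustrative assumptions, not the ontology's actual axioms.

    # Minimal sketch of a temporal-consistency check with Allen-style
    # interval relations, over hypothetical handover phases.
    from dataclasses import dataclass

    @dataclass
    class Interval:
        name: str
        start: float
        end: float

    def before(a, b):    return a.end < b.start
    def overlaps(a, b):  return a.start < b.start < a.end < b.end

    # Hypothetical rule: the announcement phase must precede the
    # channel switch; otherwise flag a potential temporal inconsistency.
    announce = Interval("RBC handover announcement", 0.0, 2.0)
    switch   = Interval("radio channel switch",      1.5, 4.0)

    if not before(announce, switch):
        relation = "overlaps" if overlaps(announce, switch) else "violates ordering"
        print(f"temporal inconsistency: '{announce.name}' {relation} '{switch.name}'")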
Abstract: Search engines play an important role on the internet
in retrieving relevant documents from among the huge number of
web pages. However, a search engine retrieves a large number of
documents for a given search topic. To retrieve the most meaningful
documents related to the search topic, ranking algorithms are used in
information retrieval. Ranking the retrieved documents is one of the
practical problems in data mining and information retrieval. This
paper surveys various page ranking algorithms and page
segmentation algorithms and compares those used for information
retrieval. Diverse PageRank-based algorithms such as Page Rank
(PR), Weighted Page Rank (WPR), Weighted Page Content Rank
(WPCR), Hyperlink-Induced Topic Search (HITS), Distance Rank,
Eigen Rumor, Time Rank, Tag Rank, Relational Based Page Rank,
and Query Dependent Ranking are discussed and compared.
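For concreteness, here is a minimal Python sketch of the basic PageRank iteration (PR), the foundation of the variants listed above; the damping factor, iteration count, and toy link graph are illustrative.

    # Minimal sketch of PageRank: rank flows along hyperlinks and is
    # blended with a uniform teleport term via the damping factor d.
    import numpy as np

    def pagerank(links, d=0.85, n_iter=100):
        """links: {page: [pages it links to]}. Returns {page: rank}."""
        pages = sorted(links)
        n = len(pages)
        idx = {p: i for i, p in enumerate(pages)}
        rank = np.full(n, 1.0 / n)
        for _ in range(n_iter):
            new = np.full(n, (1 - d) / n)
            for page, outs in links.items():
                if outs:
                    share = d * rank[idx[page]] / len(outs)
                    for out in outs:
                        new[idx[out]] += share
                else:  # dangling page: spread its rank uniformly
                    new += d * rank[idx[page]] / n
            rank = new
        return dict(zip(pages, rank))

    web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    for page, r in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {r:.3f}")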