Abstract: A Petrol Fuel Station (PFS) poses potential hazards to the people, assets, environment and reputation of an operating company. Fire hazards, static electricity, and air pollution caused by aliphatic and aromatic organic compounds are major causes of accident/incident occurrence at fuel stations. Factors such as carelessness, maintenance, housekeeping, slips, trips and falls, transportation hazards, major and minor injuries, robbery and snake bites have the potential to create unsafe conditions. The level of risk of these hazards varies with location and country. The emphasis that governments place on safety considerations varies around the world: safety records in developed countries are much better than safety statistics in developing countries. There is no significant approach available to highlight unsafe acts and unsafe conditions during the operation and maintenance of fuel stations. Fuel stations are among the most common facilities that contain flammable and hazardous materials, and due to their continuous operation they pose various hazards to the people, environment and assets of an organization. Controlling these hazards requires a specific approach. PFS operation is unique compared to other businesses; smooth operation demands the involvement of the operating company, the contractor and the operator group. This study addresses hazard-contributing factors that have the potential to make PFS operation risky. One year of data was collected, 902 activities were analyzed, and comparisons were made to highlight significant contributing factors. The study will help PFS outlet marketing companies make their fuel station operations safer, and will help health, safety and environment (HSE) professionals close the gap in safety practice at PFSs.
Abstract: The main purpose of this research is the calculation of implicit prices of the environmental level of air quality in the city of Moscow on the basis of housing property prices. The database used contains records of approximately 20 thousand apartments and was provided by a leading real estate agency operating in Russia. The explanatory variables include physical characteristics of the houses, environmental data (industry emissions), neighbourhood sociodemographic data, and geographic data (the GPS coordinates of each house). The hedonic regression results for the ecological variables show "negative" prices as the level of air contamination from substances such as carbon monoxide, nitrogen dioxide, sulphur dioxide, and suspended particles (CO, NO2, SO2, TSP) increases. The marginal willingness to pay for higher environmental quality is presented for linear and log-log models.
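As an illustration of the hedonic approach described above, the sketch below fits a single-regressor log-log model in pure Python. The data, the one-variable specification, and the -0.3 elasticity are hypothetical simplifications for illustration, not the study's actual model or estimates.

```python
import math

def ols_slope_intercept(x, y):
    # Closed-form one-variable OLS fit: y = a + b*x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def loglog_elasticity(prices, pollution):
    # Log-log hedonic model: log(price) = a + b*log(pollution).
    # b is the price elasticity with respect to pollution (expected negative).
    _, b = ols_slope_intercept([math.log(c) for c in pollution],
                               [math.log(p) for p in prices])
    return b

# Synthetic data exactly following price = 100 * pollution^(-0.3)
pollution = [1.0, 2.0, 4.0, 8.0]
prices = [100.0 * c ** -0.3 for c in pollution]
print(round(loglog_elasticity(prices, pollution), 3))  # -0.3
```

In the log-log form the fitted slope is directly the elasticity; the marginal willingness to pay at a given point then follows as b * price / concentration.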
Abstract: The customary practice of identifying industrial sickness is a set of traditional techniques which rely upon a range of manual monitoring and compilation of financial records. This makes the process tedious and time-consuming, and it is often susceptible to manipulation. Therefore, readily available tools are required which can deal with the uncertain situations arising out of industrial sickness. This is all the more significant for a country like India, where the fruits of development are rarely equally distributed. In this paper, we propose an approach based on an Artificial Neural Network (ANN) to deal with industrial sickness, with specific focus on a few such units taken from the less developed north-east (NE) Indian state of Assam. The proposed system provides decisions regarding industrial sickness using eight different parameters which are directly related to the stages of sickness of such units. The mechanism primarily uses certain signals and symptoms of industrial health to decide upon the state of a unit. Specifically, we formulate an ANN-based block with data obtained from a few selected units of Assam so that the required decisions related to industrial health can be taken. The system thus formulated could become an important part of planning and development. It can also contribute towards the computerization of decision support systems related to industrial health and help in better management.
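The decision block described above can be pictured as a small feedforward network mapping eight health indicators to a sickness score. The sketch below shows only the forward pass with illustrative, hand-picked weights; the actual system would learn its weights from the Assam unit data by training, and the indicator values here are invented.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(inputs, w_hidden, b_hidden, w_out, b_out):
    # One-hidden-layer feedforward pass: 8 inputs -> 3 hidden units -> 1 output.
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

# Hypothetical eight indicators of industrial health, scaled to [0, 1].
indicators = [0.2, 0.8, 0.5, 0.1, 0.9, 0.4, 0.3, 0.6]
# Illustrative weights; a real model would learn these by backpropagation.
w_hidden = [[0.5] * 8, [-0.5] * 8, [0.1] * 8]
b_hidden = [0.0, 0.0, 0.0]
w_out = [1.0, -1.0, 0.5]
b_out = -0.2
score = forward(indicators, w_hidden, b_hidden, w_out, b_out)
print(0.0 < score < 1.0)  # True — a sickness score strictly between 0 and 1
```

A threshold on the score (e.g. classify as "sick" above 0.5) would then yield the discrete decision the abstract describes.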
Abstract: The objective of this study was to develop safety practices suitable for Thai industrial operators, based on Incident and Injury Free (IIF), to create safe behavior and reduce unsafe records in the petroleum industry. A total of 310 technicians (295 males and 15 females) in the service maintenance section participated in this program. The safety attitude and safety behavior levels of the technicians before and after attending the developed safety practices were evaluated using a questionnaire and on-site observation. After the developed practice program was applied, the safety attitude and safety behavior rose to very good and good levels, respectively. Evaluating the follow-up unsafe records, it was found that injuries were reduced from 0.11 to 0 cases/month, medical treatment cases from 0.22 to 0 cases/month, and first aid cases from 1 to 0.33 cases/month. The developed safe working practice was successfully implemented for Thai industrial operators.
Abstract: This paper presents a methodology based on machine
learning approaches for a short-term rain forecasting system. Decision
Tree, Artificial Neural Network (ANN), and Support Vector Machine
(SVM) were applied to develop classification and prediction models
for rainfall forecasts. The goals of this presentation are to
demonstrate (1) how feature selection can be used to identify the
relationships between rainfall occurrences and other weather
conditions and (2) what models can be developed and deployed to
produce accurate rainfall estimates that support the decisions to
launch the cloud seeding operations in the northeastern part of
Thailand. Datasets were collected during 2004-2006 from the
Chalermprakiat Royal Rain Making Research Center at Hua Hin,
Prachuap Khiri Khan, the Chalermprakiat Royal Rain Making
Research Center at Pimai, Nakhon Ratchasima, and the Thai
Meteorological Department (TMD). A total of 179 records with 57
features were merged and matched by unique date. There are three
main parts in this work. Firstly, a decision tree induction algorithm
(C4.5) was used to classify the rain status into either rain or no-rain.
The classification tree achieves an overall accuracy of 94.41% under
five-fold cross-validation. The C4.5 algorithm was also used to
classify the rain amount into three classes: no-rain (0-0.1 mm),
few-rain (0.1-10 mm), and moderate-rain (>10 mm); this tree achieves
an overall accuracy of 62.57%. Secondly, an ANN
was applied to predict the rainfall amount, and the root mean square
error (RMSE) was used to measure the training and testing errors of
the ANN. The ANN yields a lower RMSE of 0.171 for
daily rainfall estimates, when compared to next-day and next-2-day
estimation. Thirdly, the ANN and SVM techniques were also used to
classify the rain amount into three classes as no-rain, few-rain, and
moderate-rain as above. The ANN and SVM models achieved overall
accuracies of 68.15% and 69.10%, respectively, for same-day
prediction. The obtained results illustrate the comparison
of the predictive power of different methods for rainfall estimation.
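The three-class rain-amount task and the overall-accuracy measure used above are easy to make concrete. In the sketch below, the class boundaries are taken from the abstract, while the sample rainfall values are invented for illustration.

```python
def rain_class(amount_mm):
    # Class boundaries as given in the abstract's C4.5 experiment.
    if amount_mm <= 0.1:
        return "no-rain"
    if amount_mm <= 10.0:
        return "few-rain"
    return "moderate-rain"

def overall_accuracy(predicted, actual):
    # Fraction of records whose predicted class matches the observed one.
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

labels = [rain_class(x) for x in [0.0, 5.0, 20.0]]
print(labels)  # ['no-rain', 'few-rain', 'moderate-rain']
```

Feeding the predicted and observed class labels of all 179 records into `overall_accuracy` yields figures directly comparable to the 62.57%, 68.15% and 69.10% reported above.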
Abstract: There is a general feeling that Internet crime is an
advanced type of crime that has not yet infiltrated developing
countries like Uganda. The carefree nature of the Internet, in which
anybody can publish anything at any time, poses a serious security threat
for any nation. Unfortunately, there are no formal records about this
type of crime for Uganda. Could this mean that it does not exist
there? The author conducted independent research to ascertain
whether cybercrimes have affected people in Uganda and, if so, to
discover where they are reported. This paper highlights the findings.
Abstract: Saudi Arabia is an arid country which depends on
costly desalination plants to satisfy the growing residential water
demand. Prediction of water demand is usually a challenging task
because the forecast model should consider variations in economic
progress, climate conditions and population growth. The task is
further complicated by the fact that Mecca is visited regularly by
large numbers of people during specific months of the year due to religious
occasions. In this paper, a neural network model is proposed to
handle the prediction of the monthly and yearly water demand for
Mecca city, Saudi Arabia. The proposed model will be developed
based on historic records of water production and estimated visitors-
distribution. The driving variables for the model include annuallyvarying
variables such as household income, household density, and
city population, and monthly-varying variables such as expected
number of visitors each month and maximum monthly temperature.
Abstract: This paper presents the possibilities of using the Weibull statistical distribution in modeling the distribution of defects in ERP systems. A case study follows, which examines helpdesk records of defects that were reported as the result of one ERP subsystem upgrade. The applied modeling yields a model of the reliability of the ERP system from a user perspective, with estimated parameters such as the expected maximum number of defects in one day or the predicted minimum number of defects between two upgrades. The applied measurement-based analysis framework proves suitable for predicting future states of the reliability of the observed ERP subsystems.
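A minimal sketch of the modeling idea above, assuming defect arrivals after an upgrade follow a two-parameter Weibull law; the shape, scale and defect-count values used in the example are illustrative, not the paper's estimates.

```python
import math

def weibull_cdf(t, shape, scale):
    # Two-parameter Weibull CDF: F(t) = 1 - exp(-(t/scale)^shape), t >= 0.
    return 1.0 - math.exp(-((t / scale) ** shape))

def expected_defects_in_window(total_defects, t0, t1, shape, scale):
    # Defects expected to be reported between days t0 and t1 after an
    # upgrade, if arrivals follow the fitted Weibull distribution.
    return total_defects * (weibull_cdf(t1, shape, scale) -
                            weibull_cdf(t0, shape, scale))

# Illustrative: 100 total defects, shape 1.0, scale 5 days.
print(round(expected_defects_in_window(100, 0.0, 5.0, 1.0, 5.0), 1))
```

Scanning such window estimates over the days following an upgrade gives the kind of "expected maximum defects in one day" figure the abstract mentions.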
Abstract: The data is available in abundance in any business
organization. It includes the records for finance, maintenance,
inventory, progress reports, etc. As time progresses, the data keeps
accumulating, and the challenge is to extract information from
this data bank. Knowledge discovery from these large and complex
databases is the key problem of this era. Data mining and machine
learning techniques are needed which can scale to the size of the
problems and can be customized to the business application. To derive
accurate and relevant information for a particular problem, business
analysts need to develop multidimensional models that give reliable
information so that they can make the right decision for that problem.
If the multidimensional model does not possess advanced features,
accuracy cannot be expected. The present work involves the
development of a multidimensional data model incorporating advanced
features. The computation is based on data precision and includes a
slowly changing time dimension. The final results are displayed in
graphical form.
Abstract: This paper describes a new supervised fusion (hybrid)
electrocardiogram (ECG) classification solution consisting of a new
QRS complex geometrical feature extraction as well as a new version
of the learning vector quantization (LVQ) classification algorithm
aimed at overcoming the stability-plasticity dilemma. Toward this
objective, after detection and delineation of the major events of the
ECG signal via an appropriate algorithm, each QRS region and its
corresponding discrete wavelet transform (DWT) are treated as
virtual images, and each of them is divided into eight polar sectors.
Then, the curve length of each excerpted segment is calculated
and is used as the element of the feature space. To increase the
robustness of the proposed classification algorithm versus noise,
artifacts and arrhythmic outliers, a fusion structure consisting of
five different classifiers, namely a Support Vector Machine (SVM), a
Modified Learning Vector Quantization (MLVQ) and three Multi-Layer
Perceptron–Back Propagation (MLP–BP) neural networks with
different topologies, was designed and implemented. The newly proposed
algorithm was applied to all 48 MIT–BIH Arrhythmia Database
records (within-record analysis), and the discrimination power of the
classifier in isolating the different beat types of each record was
assessed; as a result, an average accuracy of Acc = 98.51%
was obtained. The proposed method was also applied to six
arrhythmia types (Normal, LBBB, RBBB, PVC, APB, PB) belonging
to 20 different records of the aforementioned database (between-record
analysis), and an average accuracy of Acc = 95.6% was achieved.
To evaluate the performance of the newly proposed hybrid learning
machine, the obtained results were compared with similar
peer-reviewed studies in this area.
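The curve-length feature mentioned above (the length of the signal trace within each polar sector) can be sketched for a sampled 1-D segment as follows; the sampling step and the sample values below are hypothetical.

```python
import math

def curve_length(samples, dt=1.0):
    # Length of the polyline through the points (i*dt, samples[i]):
    # the sum of Euclidean distances between consecutive sample points.
    return sum(math.hypot(dt, b - a) for a, b in zip(samples, samples[1:]))

# A flat segment contributes only horizontal distance...
print(curve_length([0.0, 0.0, 0.0], dt=1.0))  # 2.0
# ...while a 3-4-5 step contributes the hypotenuse.
print(curve_length([0.0, 3.0], dt=4.0))       # 5.0
```

Computing this length for each of the eight polar sectors of a QRS "virtual image" yields one element of the feature vector per sector.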
Abstract: Recent trends in building construction in Libya are
more toward tall (high-rise) building projects. As a consequence, a
better estimation of the lateral loading in the design process is
becoming the focus of a safe and cost-effective building industry. By
and large, Libya is not considered a potential earthquake-prone zone,
making wind the dominant design lateral load. Current design
practice in the country estimates wind speeds on a merely random
basis by applying a certain factor of safety to the chosen wind
speed. Therefore, the need for a more accurate estimation of wind
speeds in Libya was the motivation behind this study. Records of
wind speed data were collected from 22 meteorological stations in
Libya, and were statistically analysed. The analysis of more than four
decades of wind speed records suggests that the country can be
divided into four zones of distinct wind speeds. A computer “survey”
program was used to draw a design wind speed contour map
for the state of Libya.
The paper presents the statistical analysis of Libya's recorded
wind speed data and proposes design wind speed values for a 50-year
return period covering the entire country.
Abstract: Transaction management is one of the most crucial requirements for enterprise application development, which often requires concurrent access to distributed data shared amongst multiple applications/nodes. Transactions guarantee the consistency of data records when multiple users or processes perform concurrent operations. The existing Fault Tolerance Infrastructure for Mobile Agents (FTIMA) provides fault-tolerant behavior in distributed transactions and uses a multi-agent system for distributed transaction processing. In the existing FTIMA architecture, data flows through the network and contains personal, private or confidential information. In banking transactions, a minor change in the transaction can cause a great loss to the user. In this paper we modify the FTIMA architecture to ensure that the user request reaches the destination server securely and without any change. We use Triple DES for encryption/decryption and the MD5 algorithm to verify the integrity of the message.
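The MD5-based validity check described above amounts to recomputing the digest at the destination and comparing it with the one sent alongside the message; a minimal sketch using Python's standard hashlib (the message content is invented):

```python
import hashlib

def md5_digest(message: bytes) -> str:
    # Hex digest of the message, computed by the sender.
    return hashlib.md5(message).hexdigest()

def verify(message: bytes, digest: str) -> bool:
    # Recompute the digest at the destination; any in-transit change
    # to the message changes the digest and fails the comparison.
    return md5_digest(message) == digest

request = b"transfer 100 to account 42"
tag = md5_digest(request)
print(verify(request, tag))                         # True
print(verify(b"transfer 900 to account 42", tag))   # False
```

Note that MD5 is no longer considered collision-resistant; a modern design would use SHA-256 (swap in `hashlib.sha256`), but the verification mechanism is the same.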
Abstract: We report on the results of a pilot study in which a data-mining tool was developed for mining audiology records. The records were heterogeneous in that they contained numeric, category and textual data. The tools developed are designed to observe associations between any field in the records and any other field. The techniques employed were the statistical chi-squared test, and the use of self-organizing maps, an unsupervised neural learning approach.
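For a pair of category fields from the audiology records above, the chi-squared association test reduces to Pearson's statistic on a contingency table; a minimal sketch (the example table is invented):

```python
def chi_squared_statistic(table):
    # Pearson chi-squared statistic for a 2-D contingency table:
    # sum over cells of (observed - expected)^2 / expected, where
    # expected = row_total * column_total / grand_total.
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / total
            stat += (obs - exp) ** 2 / exp
    return stat

# Independent fields give a statistic of 0; association raises it.
print(chi_squared_statistic([[10, 10], [10, 10]]))  # 0.0
print(round(chi_squared_statistic([[20, 10], [10, 20]]), 3))
```

The statistic is then compared against a chi-squared critical value for the table's degrees of freedom to decide whether the two fields are associated.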
Abstract: The amount of dissolved oxygen in a river has a great direct effect on aquatic macroinvertebrates, and this in turn influences the regional ecosystem indirectly. In this paper we try to predict dissolved oxygen in rivers by employing a simple fuzzy logic model, the Wang-Mendel method. This model uses only previous records to estimate upcoming values. For this purpose, daily and hourly records of eight stations in the Au Sable watershed in Michigan, United States, are employed for periods of 12 years and 50 days, respectively. Calculations indicate that for long-period prediction it is better to increase the input intervals, but for filling in missing data it is advisable to decrease the interval. Increasing the partitioning of the input and output features influences accuracy only slightly but makes the model very time-consuming. Increasing the number of input data acts similarly to increasing the number of partitions. A large amount of training data does not essentially improve accuracy, so an optimum training length should be selected.
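The first step of the Wang-Mendel method divides each input range into overlapping fuzzy regions and assigns each record to the region where its membership degree is highest; a sketch with triangular membership functions (the centers and width below are illustrative, not the paper's partitioning):

```python
def triangular(x, a, b, c):
    # Triangular membership function with feet at a and c and peak at b.
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def best_partition(x, centers, half_width):
    # Wang-Mendel step: assign x to the fuzzy region (index into
    # `centers`) where its membership degree is highest.
    degrees = [triangular(x, c - half_width, c, c + half_width)
               for c in centers]
    return max(range(len(centers)), key=lambda i: degrees[i])

# Three regions centered at 0, 5 and 10 mg/L of dissolved oxygen.
print(best_partition(5.0, [0.0, 5.0, 10.0], 5.0))  # 1
```

Subsequent Wang-Mendel steps generate one fuzzy rule per training record from these assignments and resolve conflicting rules by keeping the one with the highest degree; increasing the number of centers is exactly the "partitioning" whose cost the abstract discusses.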
Abstract: An electrocardiogram (ECG) data compression algorithm
is needed that reduces the amount of data to be transmitted, stored
and analyzed without losing the clinical information content. A
wavelet ECG data codec based on the Set Partitioning In Hierarchical
Trees (SPIHT) compression algorithm is proposed in this paper. The
SPIHT algorithm has achieved notable success in still image coding.
We modified the algorithm for the one-dimensional (1-D) case and
applied it to compression of ECG data.
This compression method achieves a small percent root mean square
difference (PRD) and a high compression ratio with low
implementation complexity. Experiments on selected
records from the MIT-BIH arrhythmia database revealed that the
proposed codec is significantly more efficient in compression and in
computation than previously proposed ECG compression schemes.
Compression ratios of up to 48:1 for ECG signals lead to acceptable
results for visual inspection.
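The PRD quality measure used above compares the reconstructed signal against the original; a minimal sketch (the signals are invented):

```python
import math

def prd(original, reconstructed):
    # Percent root-mean-square difference between the original signal
    # and its reconstruction after compression and decompression.
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)

print(prd([1.0, 2.0, 2.0], [1.0, 2.0, 2.0]))  # 0.0 — perfect reconstruction
print(prd([3.0, 0.0], [0.0, 0.0]))            # 100.0 — all signal lost
```

A good codec drives PRD toward zero while the compression ratio (original bits over compressed bits, e.g. the 48:1 above) grows.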
Abstract: With the rapid growth in business size, today's businesses are orienting towards electronic technologies. Amazon.com and ebay.com are some of the major stakeholders in this regard. Unfortunately, the enormous amount of hugely unstructured data on the web, even for a single commodity, has become a cause of ambiguity for consumers. Extracting valuable information from such ever-increasing data is an extremely tedious task and is fast becoming critical to the success of businesses. Web content mining can play a major role in solving these issues. It involves using efficient algorithmic techniques to search and retrieve the desired information from the seemingly unsearchable unstructured data on the Internet. Application of web content mining can be very encouraging in the areas of customer relations modeling, billing records, logistics investigations, product cataloguing and quality management. In this paper we present a review of some very interesting, efficient yet implementable techniques from the field of web content mining and study their impact in areas specific to business user needs, focusing on both the customer and the producer. The techniques we review include mining by developing a knowledge-base repository of the domain, iterative refinement of user queries for personalized search, a graph-based approach to the development of a web crawler, and filtering information for personalized search using website captions. These techniques have been analyzed and compared on the basis of their execution time and the relevance of the results they produced for a particular search.
Abstract: Internal controls of accounting are an essential
business function for a growth-oriented organization, and include the
elements of risk assessment, information communications and even
employees' roles and responsibilities. Internal controls of accounting
systems are designed to protect a company from fraud, abuse and
inaccurate data recording and help organizations keep track of
essential financial activities. Internal controls of accounting provide a
streamlined solution for organizing all accounting procedures and
ensuring that the accounting cycle is completed consistently and
successfully. Implementing a formal Accounting Procedures Manual
for the organization allows the financial department to facilitate
several processes and maintain rigorous standards. Internal controls
also allow organizations to keep detailed records, manage and
organize important financial transactions and set a high standard for
the organization's financial management structure and protocols. A
well-implemented system also reduces the risk of accounting errors
and abuse. A well-implemented controls system allows a company's
financial managers to regulate and streamline all functions of the
accounting department. Internal controls of accounting can be set up
for every area to track deposits, monitor check handling, keep track
of creditor accounts, and even assess budgets and financial statements
on an ongoing basis. Setting up an effective accounting system to
monitor accounting reports, analyze records and protect sensitive
financial information also can help a company set clear goals and
make accurate projections. Creating efficient accounting processes
allows an organization to set specific policies and protocols on
accounting procedures, and reach its financial objectives on a regular
basis. Internal accounting controls can help keep track of such areas
as cash-receipt recording, payroll management, appropriate recording
of grants and gifts, cash disbursements by authorized personnel, and
the recording of assets. These systems also can take into account any
government regulations and requirements for financial reporting.
Abstract: The amount of information being churned out by the field of biology has jumped manifold and now requires the extensive use of computer techniques for its management. Protein sequence similarity, which predominates in this sea of biological information, is key to detecting protein evolutionary relationships. Protein sequence similarity typically implies homology, which in turn may imply structural and functional similarities. In this work, we propose a learning method for detecting remote protein homology. The proposed method uses a transformation that converts a protein sequence into a fixed-dimensional representative feature vector. Each feature vector records the sensitivity of a protein sequence to a set of amino acid substrings generated from the protein sequences of interest. These features are then used in conjunction with support vector machines for the detection of remote protein homology. The proposed method is tested and evaluated on two different benchmark protein datasets, and it is able to deliver improvements over most existing homology detection methods.
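The fixed-dimensional substring representation described above is close in spirit to a k-mer count vector; the sketch below uses a reduced four-letter alphabet for brevity (real protein work uses all 20 amino acids) and simple counts in place of the paper's sensitivity scores, so it is an illustration of the idea rather than the authors' transformation.

```python
from itertools import product

def kmer_feature_vector(sequence, alphabet="ACDE", k=2):
    # One dimension per possible k-length substring over the alphabet;
    # every sequence maps to a vector of the same fixed length, which is
    # what a kernel method such as an SVM requires.
    kmers = ["".join(p) for p in product(alphabet, repeat=k)]
    index = {m: i for i, m in enumerate(kmers)}
    vec = [0] * len(kmers)
    for i in range(len(sequence) - k + 1):
        sub = sequence[i:i + k]
        if sub in index:
            vec[index[sub]] += 1
    return vec

vec = kmer_feature_vector("ACCA")
print(len(vec), sum(vec))  # 16 dimensions, 3 substrings counted
```

With all 20 amino acids and k = 2 the vector has 400 dimensions; these vectors are then fed to the SVM for homology classification.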
Abstract: Warehousing is commonly used in factories for the
storage of products until delivery of orders. As the amount of
stored product increases, manual handling becomes tedious. In
recent years, manual storage has been converted into fully
or partially computer-controlled systems, also known as Automated
Storage and Retrieval Systems (AS/RS). This paper discusses an
ASRS system, which was designed such that the best storage location
for the products is determined by utilizing a fuzzy control system.
The design maintains records of the products to be stored or already
in store and the storage/retrieval times, along with the availability
status of the storage locations. This paper discusses the maintenance
of the above-mentioned records and the use of fuzzy logic to
determine the optimum storage location for the products. The paper
further discusses the dynamic splitting and merging of storage
locations depending on product sizes.
Abstract: Many studies have been conducted worldwide to derive
attenuation relationships; however, few relationships have been
developed for the seismic region of the Iranian plateau, and only a
few of these studies have derived attenuation relationships for
parameters such as uniform duration.
Uniform duration is the total time during which the acceleration is
larger than a given threshold value (default is 5% of PGA). In this
study, the database is the same as that used previously by Ghodrati
Amiri et al. (2007), with the same correction methods for earthquake
records in Iran. However, in this study records from earthquakes with
MS < 4.0 were excluded from the database, each record was then
individually filtered, and the dataset was expanded. This new set of
attenuation relationships for Iran is derived based on tectonic
conditions, with sites classified into rock and soil. The earthquake
parameters were chosen to be
hypocentral distance and magnitude in order to make it easier to use
the relationships for seismic hazard analysis. Tehran is the capital
city of Iran, with a large number of important structures. In this study,
a probabilistic approach has been utilized for seismic hazard
assessment of this city. The resulting uniform duration against return
period diagrams are suggested to be used in any projects in the area.
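Uniform duration as defined above (the total time during which the acceleration exceeds a threshold, by default 5% of PGA) can be computed directly from a sampled accelerogram; a minimal sketch with an invented record:

```python
def uniform_duration(accel, dt, pga_fraction=0.05):
    # Total time during which |acceleration| exceeds a threshold,
    # here the default 5% of the record's peak ground acceleration (PGA).
    pga = max(abs(a) for a in accel)
    threshold = pga_fraction * pga
    return dt * sum(abs(a) > threshold for a in accel)

# Invented record sampled at dt = 0.01 s; PGA = 1.0, threshold = 0.05.
record = [0.0, 0.1, 1.0, 0.02, -0.5]
print(round(uniform_duration(record, 0.01), 2))  # 0.03 — three samples exceed
```

Repeating this over a catalogue of records, grouped by magnitude and hypocentral distance, gives the data points from which the attenuation relationships above are regressed.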