Abstract: This article analyzes the value of audiovisual sources, which possess high integrative potential and allow the study of the movement of information through history: the transfer of information from generation to generation, which in essence provides continuity of historical development and the inheritance of traditions. The information fixed in these sources is considered a source of knowledge not only about the past condition of society, but also significant for programming its subsequent activity.
Abstract: When the profile information of an existing road is
missing or not up-to-date and the parameters of the vertical
alignment are needed for engineering analysis, the engineer has to recreate
the geometric design features of the road alignment using
collected profile data. The profile data may be collected using
traditional surveying methods, global positioning systems, or digital
imagery. This paper develops a method that estimates the parameters
of the geometric features that best characterize existing vertical
alignments in terms of tangents and curve expressions, which may
be symmetrical, asymmetrical, reverse, or complex vertical
curves. The method is implemented as an Excel-based optimization
that minimizes the differences between the observed profile and the
profiles estimated from the vertical curve equations. A 'wireframe'
representation of the profile makes the approach applicable to all types of
vertical curves. A secondary contribution of this paper is to introduce
the properties of the equal-arc asymmetrical curve, which has recently
been developed in the highway geometric design field.
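As a hedged illustration of the optimization step described above (the paper uses an Excel-based solver; this is an independent sketch with our own function name and station units), a single parabolic vertical curve y = a + b·x + c·x² can be recovered from observed profile points by ordinary least squares:

```python
def fit_vertical_curve(stations, elevations):
    """Least-squares fit of a parabolic vertical curve
    y = a + b*x + c*x**2 to observed profile points.
    Returns (a, b, c); b is the entry grade and, for a curve of
    length L, b + 2*c*L is the exit grade."""
    n = float(len(stations))
    sx = sum(stations)
    sx2 = sum(x * x for x in stations)
    sx3 = sum(x ** 3 for x in stations)
    sx4 = sum(x ** 4 for x in stations)
    sy = sum(elevations)
    sxy = sum(x * y for x, y in zip(stations, elevations))
    sx2y = sum(x * x * y for x, y in zip(stations, elevations))
    # Normal equations as an augmented 3x3 system, solved by
    # Gauss-Jordan elimination with partial pivoting.
    m = [[n, sx, sx2, sy],
         [sx, sx2, sx3, sxy],
         [sx2, sx3, sx4, sx2y]]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for row in range(3):
            if row != col:
                f = m[row][col] / m[col][col]
                m[row] = [v - f * p for v, p in zip(m[row], m[col])]
    return tuple(m[i][3] / m[i][i] for i in range(3))
```

Fitting such segments piecewise between tangents would extend the idea to asymmetrical, reverse, and complex curves.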
Abstract: In this paper we further develop the sequential life test approach presented in a previous article [1], using an underlying two-parameter Inverse Weibull sampling distribution. The location parameter, or minimum life, is considered equal to zero. Once again we provide rules for making one of three possible decisions as each observation becomes available: accept the null hypothesis H0; reject the null hypothesis H0; or obtain additional information by making another observation. The product being analyzed is a new electronic component, and little information is available about the possible values of the parameters of the corresponding Inverse Weibull underlying sampling distribution. To estimate the shape and scale parameters of the underlying Inverse Weibull model we use a maximum likelihood approach for censored failure data. A new example further develops the proposed sequential life testing approach.
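As a hedged illustration of the estimation step (our own sketch: a crude grid-search MLE on complete data, not the paper's censored-data procedure), the two-parameter Inverse Weibull with shape β and scale θ has CDF F(t) = exp(−(θ/t)^β), so variates can be drawn by inversion and the log-likelihood maximized numerically:

```python
import math
import random

def inv_weibull_sample(beta, theta, n, seed=1):
    """Draw n variates by inversion of F(t) = exp(-(theta/t)**beta)."""
    rng = random.Random(seed)
    return [theta * (-math.log(rng.random())) ** (-1.0 / beta)
            for _ in range(n)]

def log_likelihood(beta, theta, data):
    """Complete-data log-likelihood of the Inverse Weibull model:
    log f(t) = log(beta) + beta*log(theta) - (beta+1)*log(t) - (theta/t)**beta."""
    return sum(math.log(beta) + beta * math.log(theta)
               - (beta + 1) * math.log(t) - (theta / t) ** beta
               for t in data)

def mle_grid(data, betas, thetas):
    """Crude grid-search MLE for (beta, theta); a real implementation
    would solve the likelihood equations numerically."""
    return max(((b, th) for b in betas for th in thetas),
               key=lambda p: log_likelihood(p[0], p[1], data))
```

Censoring would add a survival term log(1 − exp(−(θ/t_c)^β)) per censored unit to the likelihood.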
Abstract: This paper proposes a technique to protect against
email bombing. The technique employs a statistical approach, Naïve
Bayes (NB), and Neural Networks to show that it is possible to
differentiate between good and bad traffic to protect against email
bombing attacks. Neural networks and Naïve Bayes can be trained
by utilizing many email messages that include both input and output
data for legitimate and non-legitimate emails. The input to the model
includes the contents of the body of the messages, the subject, and
the headers. This information will be used to determine if the email
is normal or an attack email. Preliminary tests suggest that Naïve
Bayes can be trained to accurately identify which emails represent an
attack.
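A minimal sketch of the Naïve Bayes stage (our own illustrative implementation, not the authors' system; labels and training texts are invented, and only body text is used):

```python
import math
from collections import Counter

class NaiveBayesFilter:
    """Multinomial Naive Bayes over message words, Laplace-smoothed."""

    def __init__(self):
        self.word_counts = {"attack": Counter(), "normal": Counter()}
        self.msg_counts = {"attack": 0, "normal": 0}

    def train(self, text, label):
        self.msg_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def classify(self, text):
        words = text.lower().split()
        vocab = set(self.word_counts["attack"]) | set(self.word_counts["normal"])
        total_msgs = sum(self.msg_counts.values())
        best, best_lp = None, None
        for label in ("attack", "normal"):
            # Log prior plus log likelihood of each word.
            lp = math.log(self.msg_counts[label] / total_msgs)
            n = sum(self.word_counts[label].values())
            for w in words:
                # Laplace smoothing avoids zero probabilities.
                lp += math.log((self.word_counts[label][w] + 1)
                               / (n + len(vocab)))
            if best_lp is None or lp > best_lp:
                best, best_lp = label, lp
        return best
```

In practice the same counting would also cover subject and header tokens, as the abstract describes.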
Abstract: Lurking behavior is common in information-seeking oriented communities. Converting lurkers into contributors can help virtual communities obtain competitive advantages. Based on the ecological cognition framework, this study proposes a model to examine the antecedents of lurking behavior in information-seeking oriented virtual communities. This study argues that the desires for emotional support, information support, performance-approach, performance-avoidance, mastery-approach, mastery-avoidance, ability trust, benevolence trust, and integrity trust affect lurking behavior. This study offers an approach to understanding the determinants of lurking behavior in online contexts.
Abstract: Thailand is an agricultural country, as its weather and geography are well suited to agriculture. In 2011, the quantity of exported fresh vegetables was 126,069 tons, valued at 117.1 million US dollars. Although fresh vegetables have high export potential, there is also a lack of knowledge in areas such as chemical usage, land usage, marketing, transportation, and logistics. Nakorn Pathom province is an area where farmers and manufacturers of fresh vegetables are located. The objectives of this study are to examine the basic information of local fresh vegetable farmers in Nakorn Pathom province, the factors that affect the management of the fresh vegetable supply chain in the province, and the problems and obstacles of that supply chain. The study is limited to the flow of Nakorn Pathom fresh vegetables from the farmers to the countries that import vegetables from Thailand. The population of this study is 100 local farmers in Nakorn Pathom province. The results show that the key processes of the fresh vegetable supply chain are the supply sourcing process and the manufacturing process.
Abstract: We propose a method for removing noise and reducing the number of colors contained in a JPEG image. The main purpose of this project is to convert color images to monochrome images for color-blind viewers. We treat crisp color images such as the Tokyo subway map, in which each color carries important information. For color-blind viewers, similar colors cannot be distinguished; if we convert those colors to distinct gray values, they become distinguishable.
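One way the color-to-gray mapping could work is sketched below (an assumption of ours for illustration, not the authors' exact algorithm): collect the distinct colors, order them by luminance, and assign evenly spaced gray levels so that colors confusable by a color-blind viewer still map to clearly different grays.

```python
def distinct_gray_map(colors):
    """Map each distinct RGB color to a gray level (0-255) such that
    every color receives a different, well-separated gray value.
    Colors are ordered by ITU-R BT.601 luminance so the mapping
    stays roughly brightness-consistent."""
    def luminance(c):
        r, g, b = c
        return 0.299 * r + 0.587 * g + 0.114 * b

    uniq = sorted(set(colors), key=luminance)
    if len(uniq) == 1:
        return {uniq[0]: 128}
    step = 255.0 / (len(uniq) - 1)
    return {c: round(i * step) for i, c in enumerate(uniq)}
```

Applied to subway-line colors, for instance, four line colors spread out to four clearly separated grays.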
Abstract: Web 2.0 (social networking, blogging and online
forums) can serve as a data source for social science research because
it contains vast amounts of information from many different users.
The volume of that information has been growing at a very high rate
and is becoming a network of heterogeneous data; this makes
information difficult to find and therefore far less useful. We have proposed
a novel theoretical model for gathering and processing data from
Web 2.0 that reflects the semantic content of web pages more
accurately. This article deals with the analysis part of the model and
its usage for content analysis of blogs. The introductory part of the
article describes a methodology for gathering and processing data
from blogs. The next part of the article focuses on the evaluation
and content analysis of blogs that write about a specific trend.
Abstract: We propose a multi-agent based utilitarian approach
to model and understand information flows in social networks that
lead to Pareto optimal informational exchanges. We model the
individual expected utility function of the agents to reflect the net
value of information received. We show how this model, adapted
from a theorem by Karl Borch dealing with an actuarial risk
exchange concept in the insurance industry, can be used for social
network analysis. We develop a utilitarian framework that allows us
to interpret Pareto optimal exchanges of value as potential
information flows, while achieving a maximization of a sum of
expected utilities of information of the group of agents. We examine
some interesting conditions on the utility function under which the
flows are optimal. We illustrate the promise of this new approach to
attach economic value to information in networks with a synthetic
example.
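To make the utilitarian objective concrete, here is a toy sketch (our own simplification for illustration, not Borch's theorem or the paper's model): discrete units of information are allocated greedily to whichever agent currently has the highest marginal utility, which for a common concave utility maximizes the sum of utilities, a discrete analogue of a Pareto optimal exchange.

```python
import math

def greedy_allocate(units, n_agents, utility=lambda k: math.log(1 + k)):
    """Allocate `units` indivisible information units among agents so
    as to maximize the sum of a common concave utility function.
    Greedy by marginal gain, which is optimal when utility is concave."""
    alloc = [0] * n_agents
    for _ in range(units):
        # Give the next unit to the agent with the largest marginal gain.
        gains = [utility(a + 1) - utility(a) for a in alloc]
        alloc[gains.index(max(gains))] += 1
    return alloc
```

With identical concave utilities the greedy rule equalizes the allocation, and its total utility dominates any skewed allocation.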
Abstract: Efficient utilization of existing water is a pressing
need for Pakistan, owing to its rising population, the reduction in
present storage capacity, and poor canal delivery efficiency of 30 to
40%. A study was conducted to evaluate an irrigation system in the
cotton-wheat zone of Pakistan after watercourse lining. The
evaluation is based on cropping pattern and salinity. The study
employed an index-based approach using a Geographic Information
System together with field data. Satellite images from different
years were used to examine the effective area. Several combinations
of the ratios of signals received in different spectral bands were used
to develop this index. The Near-Infrared and Thermal-IR spectral
bands proved most effective, as this combination allowed easy
detection of salt-affected areas and of the cropping pattern of the
study area. Results showed that 9.97% of the area was under
salinity in 1992, 9.17% in 2000, and 2.29% in 2005. Similarly, 45%
of the area was under vegetation in 1992, improving to 56% in 2000
and 65% in 2005. On the basis of these results, the evaluation shows
a 30% increase in performance after the watercourse improvement.
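The per-pixel index computation can be sketched as follows (the exact band combination and threshold here are our assumptions for illustration; the study derives its own index from NIR and Thermal-IR ratios):

```python
def normalized_band_index(nir, tir):
    """Per-pixel normalized ratio of two spectral bands, in the spirit
    of NDVI-style indices: (NIR - TIR) / (NIR + TIR). Inputs are
    row-major 2D lists of band values for the same scene."""
    return [[(n - t) / (n + t) if (n + t) else 0.0
             for n, t in zip(nrow, trow)]
            for nrow, trow in zip(nir, tir)]

def area_fraction(index, threshold):
    """Fraction of pixels whose index exceeds a threshold, e.g. to
    estimate the share of salt-affected or vegetated area."""
    flat = [v for row in index for v in row]
    return sum(1 for v in flat if v > threshold) / len(flat)
```

Comparing such fractions across images from 1992, 2000, and 2005 is the kind of year-over-year evaluation the abstract reports.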
Abstract: Context awareness is a capability whereby mobile
computing devices can sense their physical environment and adapt
their behavior accordingly. The term context-awareness, in
ubiquitous computing, was introduced by Schilit in 1994 and has
become one of the most exciting concepts in early 21st-century
computing, fueled by recent developments in pervasive computing
(i.e. mobile and ubiquitous computing). These include computing
devices worn by users, embedded devices, smart appliances, sensors
surrounding users and a variety of wireless networking technologies.
Context-aware applications use context information to adapt
interfaces, tailor the set of application-relevant data, increase the
precision of information retrieval, discover services, make the user
interaction implicit, or build smart environments. For example, a
context-aware mobile phone may know that the user is currently in a
meeting room and reject any unimportant calls. One of the major
challenges in providing users with context-aware services lies in
continuously monitoring their contexts based on numerous sensors
connected to the context aware system through wireless
communication. A number of context aware frameworks based on
sensors have been proposed, but many of them have neglected the
fact that monitoring with sensors imposes heavy workloads on
ubiquitous devices with limited computing power and battery. In this
paper, we present CALEEF, a lightweight and energy-efficient
context-aware framework for resource-limited ubiquitous devices.
Abstract: Knowledge discovery from text and ontology learning
are relatively new fields. However, their usage extends to many
fields, such as Information Retrieval (IR) and its related domains.
Human Plausible Reasoning (HPR) based IR systems, for example,
need a knowledge base as their underlying resource, which is
currently built by hand. In this paper we propose an architecture
based on ontology learning methods to automatically generate the
needed HPR knowledge base.
Abstract: In this paper the General Game problem is described.
In this problem the competition-or-cooperation dilemma appears
through the two basic types of strategies. The strategy possibilities
have been analyzed to find a winning strategy in uncertain situations
(no information about the number of players or their strategy types).
A universal winning strategy does not exist, but a good solution can
be found by simulation, varying the ratio of the two types of
strategies. This new method has been used in a real contest with
human players, where the strategies created by simulation achieved
very good ranks. This construction can be applied to other real
social games as well.
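The idea of scanning the ratio of the two strategy types can be sketched with a toy payoff model (the prisoner's-dilemma payoffs and random matching are our assumptions; the General Game's actual rules are richer):

```python
# Standard prisoner's-dilemma style payoffs (an illustrative assumption):
# R = mutual cooperation, T = temptation, S = sucker, P = mutual defection.
R, T, S, P = 3.0, 5.0, 0.0, 1.0

def mean_payoff(p):
    """Expected per-player payoff when a fraction p of the population
    cooperates and opponents are matched uniformly at random."""
    return (p * p * R + p * (1 - p) * S
            + (1 - p) * p * T + (1 - p) * (1 - p) * P)

def best_ratio(steps=10):
    """Scan cooperator ratios 0, 1/steps, ..., 1 and return the ratio
    that maximizes the population's mean payoff."""
    ratios = [i / steps for i in range(steps + 1)]
    return max(ratios, key=mean_payoff)
```

With these payoffs the population-level optimum is full cooperation, even though defection is individually tempting, which is exactly the kind of tension such simulations explore.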
Abstract: Social bookmarking is an environment in which
the user's interests gradually change over time, so that the tag
data associated with the current temporal period is usually more
important than tag data temporally far from the current period.
This implies that in a social tagging system, items newly tagged
by the user are more relevant than older items. This study
proposes a novel recommender system that considers the user's
recent tag preferences. The proposed system includes the
following stages: grouping similar users into clusters using an
E-M clustering algorithm, finding similar resources based on
the user's bookmarks, and recommending the top-N items to
the target user. The study examines the system's information
retrieval performance using a dataset from del.icio.us, a
well-known social bookmarking web site. Experimental results
show that the proposed system is more effective than
traditional approaches.
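A compact sketch of the recency-weighted recommendation stage (the clustering step is omitted, and the exponential decay, half-life, and function names are our assumptions, not the paper's exact formulation):

```python
import math

def time_weight(age_days, half_life=30.0):
    """Exponential decay so recent bookmarks count more than old ones."""
    return math.exp(-math.log(2) * age_days / half_life)

def recommend_top_n(target, others, n=2):
    """`target` and each user in `others` is a list of
    (item, tags, age_days) bookmarks. Items bookmarked by the other
    users are scored by tag overlap with the target's bookmarks,
    weighted by recency, and the top-n unseen items are returned."""
    seen = {item for item, _, _ in target}
    target_tags = {t for _, tags, _ in target for t in tags}
    scores = {}
    for user in others:
        for item, tags, age in user:
            if item in seen:
                continue
            overlap = len(target_tags & set(tags))
            scores[item] = scores.get(item, 0.0) + overlap * time_weight(age)
    return sorted(scores, key=scores.get, reverse=True)[:n]
```

A recently tagged item with matching tags outranks an identically tagged but much older one, which is the temporal effect the study exploits.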
Abstract: With the extensive inclusion of documents, especially
text, in business systems, data mining does not cover the full
scope of Business Intelligence. Data mining cannot deliver its
impact by extracting useful details from large collections of
unstructured and semi-structured written material based on natural
language. The most pressing issue is to draw potential business
intelligence from text. To gain competitive advantages for the
business, it is necessary to develop a new powerful tool, text
mining, to expand the scope of business intelligence.
In this paper, we work out the strong points of text mining in
extracting business intelligence from huge amounts of textual
information sources within business systems. We apply text
mining to each stage of Business Intelligence systems to show that
text mining is a powerful tool for expanding the scope of BI. After
reviewing basic definitions and some related technologies, we
discuss their relationship to, and benefits for, text mining. Some
examples and applications of text mining are also given. The
motivation is to develop a new approach to effective and efficient
textual information analysis, and thus to expand the scope of
Business Intelligence using the powerful tool of text mining.
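As one concrete example of a basic text-mining building block that could feed a BI pipeline (our own illustration, not a method from the paper), TF-IDF keyword scoring surfaces the terms that distinguish one business document from the rest of a collection:

```python
import math
from collections import Counter

def tfidf_keywords(docs, doc_index, top_k=3):
    """Score the terms of docs[doc_index] by TF-IDF against the whole
    collection and return the top_k distinguishing keywords."""
    tokenized = [d.lower().split() for d in docs]
    df = Counter()                      # document frequency per term
    for toks in tokenized:
        df.update(set(toks))
    tf = Counter(tokenized[doc_index])  # term frequency in target doc
    n_docs = len(docs)
    scores = {w: (c / len(tokenized[doc_index]))
              * math.log(n_docs / df[w])
              for w, c in tf.items()}
    return [w for w, _ in sorted(scores.items(),
                                 key=lambda kv: kv[1], reverse=True)[:top_k]]
```

Common function words score zero (they occur in every document), while document-specific business terms rise to the top.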
Abstract: In this paper, we propose an improved 3D star skeleton
technique, a skeletonization suitable for human posture representation
that reflects the 3D information of the posture. Moreover, the
proposed technique is simple and can therefore be performed in real
time. Existing skeleton construction techniques, such as distance
transformation, Voronoi diagrams, and thinning, focus on the
precision of skeleton information. They are therefore not applicable
to real-time posture recognition, since they are computationally
expensive and highly susceptible to boundary noise. Although the
2D star skeleton was proposed to address these problems, it also has
limitations in describing the 3D information of the posture. To
represent human posture effectively, the constructed skeleton should
take the 3D information of the posture into account. The proposed
3D star skeleton contains 3D data of the human body and focuses on
human action and posture recognition. Our 3D star skeleton uses 8
projection maps, which hold 2D silhouette information and depth
data of the human surface, and its extremal points can be extracted
as features without searching the whole boundary of the object. In
execution time, our 3D star skeleton is therefore faster than a
"greedy" 3D star skeleton that uses all the boundary points on the
surface. Moreover, our method can offer a more accurate skeleton of
the posture than the existing star skeleton, since the 3D data of the
object is taken into account. Additionally, we build a codebook, a
collection of representative 3D star skeletons for 7 postures, to
recognize the posture of a constructed skeleton.
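The extremal-point idea behind a star skeleton can be sketched in 2D (a simplification to a single silhouette with a synthetic star-shaped boundary; the paper extends the idea to 8 projection maps with depth data):

```python
import math

def extremal_points(boundary):
    """Star-skeleton feature extraction: distances from the silhouette
    centroid to the boundary are scanned, and local maxima of that
    distance function (with wrap-around) are returned as extremal
    points, typically the head and limb tips of a human silhouette."""
    cx = sum(x for x, _ in boundary) / len(boundary)
    cy = sum(y for _, y in boundary) / len(boundary)
    d = [math.hypot(x - cx, y - cy) for x, y in boundary]
    n = len(d)
    return [boundary[i] for i in range(n)
            if d[i] > d[i - 1] and d[i] > d[(i + 1) % n]]
```

On a five-pointed star-shaped boundary the five tips are recovered without examining anything beyond the boundary samples themselves.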
Abstract: This study analyzed the creativity of student teams
participating in an exploratory information system development
project (ISDP) and examined antecedents of their creativity. By using
partial least squares (PLS) to analyze a sample of thirty-six teams
enrolled in an information system department project training course
that required three semesters of project-based lessons, the results
found that structural and cognitive social capital positively influence
knowledge integration, whereas relational social capital does not
have a significant influence. Knowledge integration in turn
positively affects team creativity. This study also demonstrated that
social capital significantly influences team creativity through
knowledge integration. The implications of our findings for future
research are discussed.
Abstract: All the available algorithms for blind estimation, namely the constant modulus algorithm (CMA) and the decision-directed algorithm (DDA/DFE), suffer from convergence to local minima. Moreover, if the channel drifts considerably, any DDA loses track of the channel, so their usage is limited under varying channel conditions. The primary limitation in such cases is the requirement of certain overhead bits in the transmit framework, which leads to wasteful use of bandwidth. Such arrangements also fail to use channel state information (CSI), an important aid in improving the quality of reception. In this work, the main objective is to reduce the overhead imposed by pilot symbols, which otherwise reduces system throughput. We formulate an arrangement based on dynamic Artificial Neural Network (ANN) topologies that not only lowers the overhead but also facilitates the use of CSI. A 2×2 Multiple Input Multiple Output (MIMO) system is simulated and the performance with different channel estimation schemes is evaluated. A new semi-blind approach based on a dynamic ANN is proposed for channel tracking under varying channel conditions, and its performance is compared with perfectly known CSI and least-squares (LS) based estimation.
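The LS baseline against which such schemes are compared can be sketched for a 2×2 system (our own minimal illustration, noiseless and with orthogonal pilots; the paper's simulation is far more detailed): with pilot block X and received block Y = HX, the LS estimate is Ĥ = Y X⁻¹.

```python
def mat2_mul(a, b):
    """2x2 complex matrix product."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0],
             a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0],
             a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def mat2_inv(a):
    """2x2 complex matrix inverse via the adjugate formula."""
    det = a[0][0]*a[1][1] - a[0][1]*a[1][0]
    return [[a[1][1]/det, -a[0][1]/det],
            [-a[1][0]/det, a[0][0]/det]]

def ls_channel_estimate(x_pilot, y):
    """Least-squares channel estimate H = Y X^-1 for a square,
    invertible pilot block."""
    return mat2_mul(y, mat2_inv(x_pilot))
```

With noise present, Ĥ deviates from H, and that gap is what semi-blind tracking schemes try to close while using fewer pilots.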
Abstract: Transmission and distribution lines are vital links between generating units and consumers. Because they are exposed to the atmosphere, the chance of a fault occurring on a transmission line is very high, and faults must be dealt with immediately to minimize the damage they cause. In this paper, the discrete wavelet transform of the voltage signals at the two ends of transmission lines is analyzed. The transient energy of the level-5 detail coefficients is calculated for different fault conditions. It is observed that the variation in transient energy between healthy and faulted lines provides important information that can be very useful in classifying and locating the fault.
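The transient-energy computation can be sketched with a Haar decomposition (an assumption made for portability: the paper does not state its mother wavelet, and fault-analysis work often uses Daubechies wavelets instead): the voltage signal is decomposed five times and the energy of the level-5 detail coefficients is summed.

```python
import math

def haar_step(signal):
    """One level of the Haar DWT: returns (approximation, detail)."""
    approx = [(signal[2*i] + signal[2*i + 1]) / math.sqrt(2)
              for i in range(len(signal) // 2)]
    detail = [(signal[2*i] - signal[2*i + 1]) / math.sqrt(2)
              for i in range(len(signal) // 2)]
    return approx, detail

def level_detail_energy(signal, level=5):
    """Energy (sum of squared coefficients) of the detail band at the
    given decomposition level."""
    approx = list(signal)
    detail = []
    for _ in range(level):
        approx, detail = haar_step(approx)
    return sum(c * c for c in detail)
```

A steady (healthy) voltage yields zero detail energy at every level, while a fault transient injects energy into the detail bands, which is the discriminating quantity the paper uses.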
Abstract: This paper addresses a new challenge of customer
satisfaction in mobile customer relationship management (mCRM).
It presents a conceptualization of mCRM based on the unique
characteristics of customer satisfaction, and develops an empirical
framework for the conception of customer satisfaction in mCRM. A
single-case study is applied as the methodology. To gain an overall
view of the empirical case, this paper draws on otherwise
inaccessible yet important company information. Interviews with
the main informants of the company are the key data source,
through which the issues are identified and the proposed framework
is built. The study supports the development of customer
satisfaction in mCRM, links the theoretical framework to practice,
and provides direction for future research. The paper is therefore
useful for industry, as it helps practitioners understand how
customer satisfaction changes the mCRM structure and increases
business competitive advantage. Finally, the paper contributes to
practice by linking a theoretical framework of customer satisfaction
in mCRM to a practical real case.