Abstract: Innovations not only contribute to the competitiveness of
a company but also have positive effects on revenues. On average,
product innovations account for 14 percent of companies' sales.
Innovation management has substantially changed during the last
decade, because of growing reliance on external partners. As a
consequence, a new task for purchasing arises, as firms need to
understand which suppliers actually have a high potential to
contribute to the innovativeness of the firm and which do not.
Proper organization of the purchasing function is important, since
the majority of manufacturing companies deal with substantial
material costs that pass through the purchasing function. In the past
the purchasing function was largely seen as a transaction-oriented,
clerical function, but today purchasing is the intermediary with supply
chain partners contributing to innovations, be it product or process
innovations. Therefore, the purchasing function has to be organized
differently to enable the firm's innovation potential.
However, innovations are inherently risky. There are behavioral
risks (that one partner will take advantage of the other party),
technological risks in terms of the complexity of products, the
manufacturing processes and the incoming materials, and finally market
risks, which ultimately determine the value of the innovation. These
risks are investigated in this work. Specifically, technological risks,
which concern the complexity of products and processes, are investigated
more thoroughly. Buying components of such high-end technologies
necessitates careful investigation of technical features and is therefore
usually conducted by a team of experts. It is therefore hypothesized
that the higher the technological risk, the higher the centralization of
the purchasing function as an interface with other supply chain
members.
The main contribution of this research lies in the fact that the analysis
was performed on a large data set of 1,493 companies from 25
countries, collected in the GMRG 4 survey. Most analyses of the
purchasing function are done through case studies of innovative
firms. This study therefore contributes empirical evaluations that
can be generalized.
Abstract: In the culture of Thailand, the Yak serves as a mediated
icon representing strength, power, and mystical protection, not only
for the Buddha but also for the population of worshipers. Originating from
the forests of China, the Yak continues to stand guard at the gates of
Buddhist temples. The Yak represents Thai culture in the hearts of
Thai people. This paper presents a qualitative study regarding the
curious mix of media, culture, and religion that projects the Yak of
Thailand as a larger-than-life message throughout the political,
cultural, and religious spheres. The gate guardians, or gods as they
are sometimes called, appear throughout the religious temples of
Asian cultures. However, the Asian cultures demonstrate differences
in artistic renditions (or presentations) of such sentinels. Thailand's
gate guardians (the Yak) stand in front of many Buddhist temples, and
these iconic figures display unique features with varied symbolic
significance. The temple (or wat) plays a vital role in every
community; and, for many people, Thailand’s temples are the
country’s most endearing sights. The authors applied folknography as
a methodology to illustrate the importance of the Thai Yak in serving
as meaningful icons that transcend not only time, but the culture,
religion, and mass media. The Yak represents mythical, religious,
artistic, cultural, and militaristic significance for the Thai people.
Data collection included interviews, focus groups, and natural
observations. This paper summarizes the perceptions of the Thai
people concerning their gate sentries and the relationship,
communication, connection, and the enduring respect that Thai
people hold for their guardians of the gates.
Abstract: The systematic evaluation of manufacturing
technologies with regard to the potential for product designing
constitutes a major challenge. Until now, conventional evaluation
methods have primarily considered the costs of manufacturing technologies.
Thus, the potential of manufacturing technologies for achieving
additional product design features is not completely captured. To
compensate for this deficit, final evaluations of new technologies are
mainly intuitive in practice. Therefore, an additional evaluation
dimension is needed which takes the potential of manufacturing
technologies for specific realizable product designs into account. In
this paper, we present the approach of an evaluation method for
selecting manufacturing technologies with regard to their potential
for product designing. This research is done within the Fraunhofer
innovation cluster »AdaM« (Adaptive Manufacturing) which targets
the development of resource-efficient and adaptive manufacturing
technology processes for complex turbomachinery components.
Abstract: Health analytics (HA) is used in healthcare systems
for effective decision making, management and planning of
healthcare and related activities. However, user resistance, the unique
position of medical data content and structure (including
heterogeneous and unstructured data), and impromptu HA projects
have held up progress in HA applications. Notably, the accuracy
of outcomes depends on the skills and the domain knowledge of the
data analyst working on the healthcare data. The success of HA depends
on having a sound process model, effective project management and
availability of supporting tools. Thus, to overcome these challenges
through an effective process model, we propose an HA process model
with features from rational unified process (RUP) model and agile
methodology.
Abstract: Different-order modulations combined with different
coding schemes allow sending more bits per symbol, thus achieving
higher throughputs and better spectral efficiencies. However, it must
also be noted that when using a modulation technique such as 64-
QAM with fewer overhead bits, better signal-to-noise ratios (SNRs) are
needed to overcome any inter-symbol interference (ISI) and maintain
a certain bit error ratio (BER). The use of adaptive modulation allows
wireless technologies to yield higher throughputs while also
covering long distances. The aim of this paper is to implement the
Adaptive Modulation and Coding (AMC) features of the WiMAX
PHY in MATLAB and to analyze the performance of the system in
different channel conditions (AWGN, Rayleigh and Rician fading
channels) with channel estimation and blind equalization. Simulation
results have demonstrated that increasing the modulation order
increases both the throughput and the BER. These results reveal a
trade-off among modulation order, FFT length, throughput, BER and
spectral efficiency. The BER changes gradually for the AWGN channel
and erratically for the Rayleigh and Rician fading channels.
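The reported trade-off between modulation order and BER can be illustrated with the standard closed-form approximation for Gray-coded square M-QAM over an AWGN channel. The Python sketch below is not the paper's MATLAB implementation (which also covers Rayleigh and Rician fading, channel estimation and blind equalization); it is only a minimal illustration of how the BER grows with modulation order at a given Eb/N0.

import numpy as np
from scipy.special import erfc

def qam_ber_awgn(M, ebn0_db):
    """Approximate BER of Gray-coded square M-QAM over AWGN."""
    k = np.log2(M)                                   # bits per symbol
    ebn0 = 10.0 ** (np.asarray(ebn0_db, dtype=float) / 10.0)
    q = 0.5 * erfc(np.sqrt(3.0 * k * ebn0 / (M - 1.0)) / np.sqrt(2.0))
    return (4.0 / k) * (1.0 - 1.0 / np.sqrt(M)) * q

ebn0_db = np.arange(0, 21, 4)
for M in (4, 16, 64):                                # QPSK, 16-QAM, 64-QAM
    print(f"M={M:<2} BER={np.round(qam_ber_awgn(M, ebn0_db), 5)}")

At any fixed Eb/N0 the higher-order constellation yields the larger BER, which is the trade-off against the higher throughput noted in the abstract.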
Abstract: Load Forecasting plays a key role in making today's
and future's Smart Energy Grids sustainable and reliable. Accurate
power consumption prediction allows utilities to organize in advance
their resources or to execute Demand Response strategies more
effectively, which enables several features such as higher
sustainability, better quality of service, and affordable electricity
tariffs. While Load Forecasting is comparatively easy and effective at
larger geographic scales, it is also useful at the smaller scale of Smart
Micro Grids, wherein the lower available grid flexibility makes accurate
prediction more critical in Demand Response applications. This paper
analyses the application of short-term load forecasting in a concrete
scenario, proposed within the EU-funded GreenCom project, which
collects load data from single
loads and households belonging to a Smart Micro Grid. Three
short-term load forecasting techniques, i.e. linear regression, artificial
neural networks, and radial basis function network, are considered,
compared, and evaluated through absolute forecast errors and training
time. The influence of weather conditions in Load Forecasting is also
evaluated. A new definition of Gain is introduced in this paper, which
serves as an indicator of short-term prediction capability and of
consistency over the forecast time span. Two models, 24- and
1-hour-ahead forecasting, are built to comprehensively compare these
three techniques.
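As an illustration of the kind of comparison described above, the sketch below trains a linear regression and a small multi-layer perceptron on lagged values of a synthetic hourly load series and reports the mean absolute error for 1-hour-ahead and 24-hour-ahead horizons. The data, lag window and model sizes are invented for illustration only; the paper's evaluation uses GreenCom household data, also includes a radial basis function network, and additionally reports training time and the proposed Gain indicator.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Synthetic hourly load (kW): a daily cycle plus noise, standing in for measured data
hours = np.arange(24 * 90)
load = 2.0 + 1.5 * np.sin(2 * np.pi * hours / 24) + 0.2 * rng.standard_normal(hours.size)

def lagged_dataset(series, horizon, n_lags=24):
    # Predict the value `horizon` hours ahead from the previous `n_lags` hourly values
    X = np.array([series[t - n_lags:t] for t in range(n_lags, series.size - horizon)])
    y = np.array([series[t + horizon - 1] for t in range(n_lags, series.size - horizon)])
    return X, y

for horizon in (1, 24):
    X, y = lagged_dataset(load, horizon)
    split = int(0.8 * len(X))
    models = [("linear regression", LinearRegression()),
              ("neural network", MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))]
    for name, model in models:
        model.fit(X[:split], y[:split])
        mae = mean_absolute_error(y[split:], model.predict(X[split:]))
        print(f"{horizon:>2}h ahead  {name:<18} MAE = {mae:.3f} kW")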
Abstract: In this research work, neural networks were applied to
classify two types of hip joint implants based on the relative hip joint
implant side speed and three components of each ground reaction
force. A walking gait condition at normal velocity was used and
carried out with each of the two hip joint implants assessed. The
temporal changes in ground reaction force kinetics were considered in
the first approach but discarded in the second one. Ground reaction
force components were obtained from eighteen patients under this
gait condition, half of whom had a hip implant of type I-II, whilst the
other half had the hip implant defined as type III by Orthoload®.
After pre-processing raw gait kinetic data and selecting the time
frames needed for the analysis, the ground reaction force components
were used to train a MLP neural network, which learnt to distinguish
the two hip joint implants in the abovementioned condition. After
training, previously unseen hip implant side and ground reaction force
components were presented to the neural networks, which assigned
those features to the correct class with reasonably high accuracy for
both the hip implant type I-II and the type III. The results suggest that
neural networks could be successfully applied in the performance
assessment of hip joint implants.
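A minimal sketch of the classification pipeline described above is given below using a multi-layer perceptron from scikit-learn. The feature layout (implant-side speed plus sampled ground reaction force components) and all values are random stand-ins, not the patients' gait data; the aim is only to show the two-class setup (type I-II versus type III).

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

# Invented stand-in data: rows are gait trials, columns are the implant-side speed plus
# three ground reaction force components sampled over ten time frames
X = rng.standard_normal((180, 1 + 3 * 10))
y = rng.integers(0, 2, 180)          # 0 = implant type I-II, 1 = implant type III

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=3000, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))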
Abstract: Over the past years, many efforts and studies have been
carried out to develop proficient tools for performing various tasks
on big data. Recently, big data has received a great deal of publicity,
and for good reason. Because such collections of datasets are large and
complex, they are difficult to process with traditional data processing
applications, which makes the development of dedicated big data tools
all the more necessary. The main aim of big data analytics is to apply
advanced analytic techniques to very large, heterogeneous datasets,
which range in size from terabytes to zettabytes and in type from
structured to unstructured and from batch to streaming. Big data
technologies are useful for datasets whose size or type is beyond the
capability of traditional relational databases to capture, manage and
process with low latency. The resulting challenges have led to the
emergence of powerful big data tools. In this survey, a collection of
big data tools is described and compared with respect to their
salient features.
Abstract: Opportunistic routing is used where the network has
features such as dynamic topology changes and intermittent network
connectivity. In delay-tolerant or disruption-tolerant networks, the
opportunistic forwarding technique is widely used. The key
idea of opportunistic routing is to select forwarding nodes for data
packets and to coordinate among these nodes to avoid duplicate
transmissions. This paper analyses the pros and cons of
various opportunistic routing techniques used in MANETs.
Abstract: The system is designed to show images which are
related to the query image. Extracting color, texture, and shape
features from an image plays a vital role in content-based image
retrieval (CBIR). Initially, the RGB image is converted into the HSV
color space due to its perceptual uniformity. From the HSV image, color
features are extracted using a block color histogram, texture features
using the Haar transform, and shape features using the Fuzzy C-means
algorithm. Then, the characteristics of the global and local color
histograms, of texture features obtained through the co-occurrence
matrix and the Haar wavelet transform, and of shape are compared and
analyzed for CBIR. Finally, the best method for each feature is fused
during similarity measurement to improve image retrieval effectiveness
and accuracy.
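The color feature step (HSV conversion followed by a block color histogram) can be sketched with OpenCV as follows. The number of blocks and the histogram bin counts are assumptions chosen for illustration, not the parameters of this work, and the texture (Haar) and shape (Fuzzy C-means) features are omitted.

import numpy as np
import cv2

def block_hsv_histogram(path, blocks=4, bins=(8, 4, 4)):
    """Split the image into blocks x blocks regions and compute a normalized HSV histogram per block."""
    bgr = cv2.imread(path)                            # OpenCV reads images as BGR
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]
    feats = []
    for i in range(blocks):
        for j in range(blocks):
            block = hsv[i * h // blocks:(i + 1) * h // blocks,
                        j * w // blocks:(j + 1) * w // blocks]
            hist = cv2.calcHist([block], [0, 1, 2], None, list(bins),
                                [0, 180, 0, 256, 0, 256])
            feats.append(cv2.normalize(hist, None).flatten())
    return np.concatenate(feats)                      # one color feature vector per image

# Retrieval would then rank database images by the distance between their vectors and the query's.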
Abstract: An ultra-low-power capacitor-less low-dropout voltage
regulator with improved transient response using gain-enhanced
feed-forward path compensation is presented in this paper. It is based on a
cascade of a voltage amplifier and a transconductor stage in the
feed-forward path with a regular error amplifier to form a composite
gain-enhanced feed-forward stage. It broadens the gain bandwidth and thus
improves the transient response without substantial increase in power
consumption. The proposed LDO, designed for a maximum output
current of 100 mA in UMC 180 nm, requires a quiescent current of
69 µA. An undershoot of 153.79 mV is observed for a load current change
from 0 mA to 100 mA and an overshoot of 196.24 mV for a change of
100 mA to 0 mA. The settling time is approximately 1.1 µs for the
output voltage undershoot case. The load regulation is 2.77
µV/mA at a load current of 100 mA. The reference voltage is generated
by an accurate band gap reference circuit of 0.8 V. The costly
resources of an SoC, such as total chip area and power consumption, are
drastically reduced by the use of a total compensation
capacitance of only 6 pF while consuming 0.096 mW.
Abstract: The most important part of modern lean low-NOx combustors is the premixer, where swirlers are often used to intensify mixing processes and to form the required flow pattern in the combustor liner. Swirling flow leads to the formation of complex eddy structures that cause flow perturbations and can give rise to combustion instability. Therefore, at the design phase, it is necessary to pay great attention to the aerodynamics of premixers. An analysis based on unsteady CFD modeling of the swirling flow in a production combustor swirler showed the presence of a large number of different eddy structures that can be conditionally divided into three types according to their location of origin and propagation path. The features of each eddy type were subsequently defined. A comparison of calculated and experimental pressure fluctuation spectra verified the correctness of the computations.
Abstract: The article is devoted to the problem of political
discourse and its reflection in mass cognition. The article describes
the myth as one of the main features of political discourse. The
dominance of an expressional and emotional component in the myth
is shown. The precedent phenomenon plays an important role in
distinguishing the myth from a linguistic point of view. Precedent
phenomena reflect linguistic cognition and are characterized by their
fame and recognition. Four types of myths are observed: master
myths, foundation myths, sustaining myths, and eschatological myths.
The myths about the national idea are characterized by national
specificity. The main aim of political discourse employing myths is
to influence mass consciousness in order to motivate the addressee to
certain actions, so that the target purpose is reached owing to the
unity of forces.
Abstract: This research proposes a novel reconstruction protocol
for restoring missing surfaces and low-quality edges and shapes in
photos of artifacts at historical sites. The protocol starts with the
extraction of a cloud of points. This extraction process is based on
four subordinate algorithms, which differ in their robustness and in
the amount of resultant points. Moreover, they offer different, but
complementary, accuracy with respect to certain related features and
to the way they build a quality mesh. The performance of our proposed protocol
is compared with other state-of-the-art algorithms and toolkits. The
statistical analysis shows that our algorithm significantly outperforms
its rivals in the resultant quality of its object files used to reconstruct
the desired model.
Abstract: In this study, a comparative analysis of the approaches
associated with the use of neural network algorithms for effective
solution of a complex inverse problem – the problem of identifying
and determining the individual concentrations of inorganic salts in
multicomponent aqueous solutions by the spectra of Raman
scattering of light – is performed. It is shown that the application of
artificial neural networks provides an average accuracy of
determination of the concentration of each salt no worse than 0.025 M.
The results of comparative analysis of input data compression
methods are presented. It is demonstrated that the use of uniform
aggregation of input features decreases the error of determining the
individual concentrations of the components by 16-18% on average.
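One plausible reading of "uniform aggregation of input features" is averaging fixed-size groups of adjacent spectral channels before they are passed to the network. The short sketch below shows this compression on invented data; the spectrum length and group size are assumptions, not the values used in this study.

import numpy as np

def aggregate_channels(spectra, group=8):
    """Uniformly aggregate adjacent spectral channels by averaging (input compression)."""
    n_samples, n_channels = spectra.shape
    usable = n_channels - n_channels % group          # drop the tail so channels divide evenly
    return spectra[:, :usable].reshape(n_samples, usable // group, group).mean(axis=2)

spectra = np.random.rand(500, 1024)                   # 500 invented Raman spectra, 1024 channels each
compressed = aggregate_channels(spectra, group=8)
print(spectra.shape, "->", compressed.shape)          # (500, 1024) -> (500, 128)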
Abstract: Consumer-to-Consumer (C2C) E-commerce has been
growing at a very high speed in recent years. Since identical or
nearly identical kinds of products compete with one another through
keyword search in C2C E-commerce, some sellers describe their
products with spam keywords that are popular but are not related to
their products. Though such products get more chances to be retrieved
and selected by consumers than those without spam keywords,
the spam keywords mislead the consumers and waste their time.
This problem has been reported in many commercial services like
eBay and Taobao, but there has been little research on solving it.
As a solution to this problem, this paper proposes a method
to classify whether keywords of a product are spam or not. The
proposed method assumes that a keyword for a given product is
more reliable if the keyword is observed commonly in specifications
of products which are the same or the same kind as the given
product. This is because the hierarchical category of a product
is, in general, determined precisely by the seller of the product, and
so is the specification of the product. Since higher layers of the
hierarchical category represent more general kinds of products, the
reliability degree is determined differently for each layer. Hence,
reliability degrees from different layers of a hierarchical category
become features for keywords, and they are used together with features
derived only from specifications for classifying the keywords. A Support
Vector Machine is adopted as the base classifier using these features,
since it is powerful and widely used in many classification tasks. In
the experiments, the proposed method is evaluated with a gold-standard
dataset from Yi-han-wang, a Chinese C2C E-commerce site,
and is compared with a baseline method that does not consider
the hierarchical category. The experimental results show that the
proposed method outperforms the baseline in F1-measure, which
proves that spam keywords are effectively identified by a hierarchical
category in C2C E-commerce.
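A minimal sketch of the classification stage, assuming the layer-wise reliability degrees have already been computed, is given below with a Support Vector Machine from scikit-learn. The feature layout (a few specification-based features plus one reliability degree per category layer), the labelling rule and the data are all invented for illustration; they are not the features or the Yi-han-wang data used in this work.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(2)

# Invented features per (product, keyword) pair: specification-only features plus one
# reliability degree per hierarchical category layer (leaf, middle, root)
n_pairs = 1000
spec_features = rng.random((n_pairs, 3))
layer_reliability = rng.random((n_pairs, 3))
X = np.hstack([spec_features, layer_reliability])
y = (layer_reliability.mean(axis=1) < 0.3).astype(int)   # 1 = spam keyword (toy labelling rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("F1 =", f1_score(y_te, clf.predict(X_te)))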
Abstract: In this paper the issue of dimensionality reduction is
investigated in finger vein recognition systems using kernel Principal
Component Analysis (KPCA). One aspect of KPCA is to find the
most appropriate kernel function for finger vein recognition, as there
are several kernel functions which can be used within PCA-based
algorithms. In this paper, however, another side of PCA-based
algorithms, particularly KPCA, is investigated: the dimension of the
feature vector, which is of importance especially when it comes to
real-world applications and usage of such algorithms. This means that
a fixed dimension of the feature vector has to be set to reduce the
dimension of the input and output data and to extract the features from
them. Then a classifier is applied to classify the data and make the
final decision. We analyze KPCA with polynomial, Gaussian, and
Laplacian kernels in detail in
this paper and investigate the optimal feature extraction dimension in
finger vein recognition using KPCA.
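The kind of experiment described, projecting feature vectors with KPCA under different kernels and feature dimensions before classification, can be sketched with scikit-learn as below. The data are random stand-ins for finger vein feature vectors and the kernel parameters and dimensions are arbitrary; the Laplacian kernel, which is not built into KernelPCA, would have to be supplied as a precomputed kernel matrix.

import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Invented stand-in for finger vein feature vectors: 40 subjects x 6 images, 256-D each
X = rng.random((240, 256))
y = np.repeat(np.arange(40), 6)

for kernel, params in [("poly", {"degree": 3}), ("rbf", {"gamma": 1e-3})]:
    for n_comp in (20, 50, 100):
        Z = KernelPCA(n_components=n_comp, kernel=kernel, **params).fit_transform(X)
        acc = cross_val_score(KNeighborsClassifier(n_neighbors=1), Z, y, cv=3).mean()
        print(f"{kernel:<4} kernel, d={n_comp:>3}: accuracy = {acc:.3f}")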
Abstract: We present our approach to using the continuous delivery
pattern for release management. One of the key practices of agile and
lean teams is the continuous delivery of new features to stakeholders.
The main benefits of this approach lie in the ability to release new
applications rapidly which has real strategic impact on the
competitive advantage of an organization. Organizations that
successfully implement Continuous Delivery have the ability to
evolve rapidly to support innovation, provide stable and reliable
software in more efficient ways, decrease the amount of resources
needed for maintenance, and lower the software delivery time and costs.
One of the objectives of this paper is to elaborate a case study in which
the IT division of the Central Securities Depository Institution (MKK)
of Turkey applies the Continuous Delivery pattern to improve its release
management process.
Abstract: An extensive amount of work has been done in data
clustering research under the unsupervised learning technique in Data
Mining during the past two decades. Moreover, several approaches
and methods have emerged focusing on clustering diverse data
types, features of cluster models and similarity rates of clusters.
However, no single clustering algorithm performs best at extracting
efficient clusters in all situations. Consequently, in order to
rectify this issue, a new technique called the Cluster
Ensemble method has emerged. This approach serves as an
alternative method for the cluster analysis problem. The main
objective of the Cluster Ensemble is to aggregate diverse
clustering solutions in such a way as to attain accuracy and to
improve upon the individual clustering algorithms. Given
the massive and rapid development of new methods in
data mining, it is essential to carry out a critical analysis of
existing techniques and of future directions. This paper presents a
comparative analysis of different cluster ensemble methods along
with their methodologies and salient features. This analysis will be
useful to the community of clustering experts and will help in deciding
the most appropriate method for the problem at hand.
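One common way to aggregate diverse clustering solutions, evidence accumulation through a co-association matrix, is sketched below as a concrete example of the ensemble idea discussed above. The toy data, the set of base k-means runs and the final number of clusters are illustrative choices only (and the consensus step assumes scikit-learn >= 1.2 for the metric="precomputed" argument); this is not a method proposed in this paper.

import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)    # toy data set

# Evidence accumulation: build a co-association matrix over several base clusterings
n = X.shape[0]
coassoc = np.zeros((n, n))
runs = [(k, seed) for k in (3, 4, 5, 6) for seed in range(5)]
for k, seed in runs:
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
    coassoc += (labels[:, None] == labels[None, :])
coassoc /= len(runs)

# Consensus clustering: treat 1 - co-association as a distance and cluster it
consensus = AgglomerativeClustering(n_clusters=4, metric="precomputed",
                                    linkage="average").fit_predict(1.0 - coassoc)
print(np.bincount(consensus))                                   # sizes of the consensus clusters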
Abstract: Vertical slotted walls can be used as permeable
breakwaters to provide economical and environmental protection
from undesirable waves and currents inside the port. The permeable
breakwaters are partially protective and have been suggested to
overcome the environmental disadvantages of fully protective
breakwaters. For regular waves, a semi-analytical model based on
an eigenfunction expansion method and utilizing a boundary condition
at the surface of each wall is developed to capture the energy
dissipation through the slots. Extensive laboratory tests are carried
out to validate the semi-analytical model. The physical model consists
of two walls, each with an impermeable upper and lower part, where the
draft is a decimal multiple of the total depth. The middle part is
permeable with a porosity of 50%. The second barrier is located at a
distance of 0.5, 1, 1.5 and 2 times the water depth from the first one.
A comparison of the theoretical results with previous studies and with
the experimental measurements of the present study shows good
agreement and demonstrates that the semi-analytical model is able to
adequately reproduce most of the important features of the experiment.