Abstract: In the present study, response surface methodology (RSM) has been used to optimize the turn-assisted deep cold rolling process of AISI 4140 steel. A regression model is developed to predict surface hardness and surface roughness using RSM and a central composite design. In the development of the predictive model, deep cold rolling force, ball diameter, initial roughness of the workpiece, and number of tool passes are considered as model variables. The rolling force and the ball diameter are found to be significant factors for surface hardness, while the ball diameter and the number of tool passes are found to be significant for surface roughness. The predicted surface hardness and surface roughness values and the subsequent verification experiments under the optimal operating conditions confirm the validity of the predictive model. The absolute average error between the experimental and predicted values at the optimal combination of parameter settings is 0.16% for surface hardness and 1.58% for surface roughness. Using the optimal processing parameters, the surface hardness is improved from 225 to 306 HV, an increase in the near-surface hardness of about 36%, and the surface roughness is improved from 4.84 µm to 0.252 µm, a decrease of about 95%. The depth of compression is found to be more than 300 µm from the microstructure analysis, in agreement with the results obtained from the microhardness measurements. A Taylor Hobson Talysurf tester, a micro-Vickers hardness tester, optical microscopy and X-ray diffraction are used to characterize the modified surface layer.
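The second-order regression model described above can be sketched in a few lines; the force and ball-diameter ranges, the coefficients and the noise level below are all invented for illustration, not taken from the experiments.

```python
import numpy as np

# Fit a second-order response surface:
# HV ~ b0 + b1*F + b2*D + b11*F^2 + b22*D^2 + b12*F*D
rng = np.random.default_rng(0)
F = rng.uniform(0.5, 2.5, 30)            # rolling force (hypothetical units)
D = rng.uniform(6.0, 12.0, 30)           # ball diameter, mm (hypothetical)
true_hv = 220 + 40*F + 8*D - 8*F**2 - 0.4*D**2 + 0.5*F*D
HV = true_hv + rng.normal(0.0, 1.0, 30)  # "measured" hardness with noise

X = np.column_stack([np.ones_like(F), F, D, F**2, D**2, F*D])
beta, *_ = np.linalg.lstsq(X, HV, rcond=None)

def predict(f, d):
    """Predicted hardness at force f and ball diameter d."""
    return np.array([1.0, f, d, f**2, d**2, f*d]) @ beta

print(predict(2.0, 10.0))
```

An optimum would then be located by evaluating `predict` over the feasible region, which is what the central composite design supports.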
Abstract: In this study, three robust prediction methods, namely the artificial neural network (ANN), the adaptive neuro-fuzzy inference system (ANFIS) and the support vector machine (SVM), were used for computing the resonant frequency of A-shaped compact microstrip antennas (ACMAs) operating in the UHF band. Firstly, the resonant frequencies of 144 ACMAs with various dimensions and electrical parameters were simulated with IE3D™, which is based on the method of moments (MoM). The ANN, ANFIS and SVM models for computing the resonant frequency were then built from the simulation data: 124 simulated ACMAs were used for training and the remaining 20 ACMAs for testing. The performance of the ANN, ANFIS and SVM models was compared in the training and test processes. The average percentage errors (APE) of the computed resonant frequencies in training were 0.457%, 0.399% and 0.600% for the ANN, ANFIS and SVM, respectively. In testing, APE values of 0.601% for the ANN, 0.744% for the ANFIS and 0.623% for the SVM were achieved. These results show that the ANN, ANFIS and SVM methods can be successfully applied to compute the resonant frequency of ACMAs, since they are useful and versatile methods that yield accurate results.
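The average percentage error (APE) used to compare the three models is simply the mean absolute relative error expressed in percent; the frequency values below are invented for illustration.

```python
import numpy as np

def ape(f_true, f_pred):
    """Average percentage error between simulated and computed frequencies."""
    f_true = np.asarray(f_true, dtype=float)
    f_pred = np.asarray(f_pred, dtype=float)
    return 100.0 * np.mean(np.abs(f_true - f_pred) / f_true)

simulated = [400.0, 520.0, 610.0]   # hypothetical IE3D results, MHz
computed  = [402.0, 518.0, 613.0]   # hypothetical model outputs, MHz
print(round(ape(simulated, computed), 3))  # → 0.459
```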
Abstract: Nowadays, cloud environments are becoming a necessity for companies: this technology provides access to data anywhere and at any time. It also provides optimized and secured access to resources and adds security for the data stored on the platform. However, some companies do not trust cloud providers, believing that providers can access and modify confidential data such as bank accounts. Much work has been done in this context, concluding that encryption performed by the provider ensures confidentiality, while overlooking the fact that the provider can also decrypt the confidential resources. The better solution is to apply operations to the data before sending them to the cloud provider, in order to make them unreadable; the principal idea is to allow users to protect their data with their own methods. In this paper, we demonstrate our approach and show that it is more efficient, in terms of execution time, than some existing methods. This work aims at enhancing the providers' quality of service and ensuring the customers' trust.
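The principle of making data unreadable on the client side before upload can be illustrated with a toy keystream transformation. This is our own pedagogical sketch, not the paper's method, and a real deployment would use a vetted authenticated cipher rather than this construction.

```python
import hashlib
from itertools import count

def keystream(secret: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the user's secret (toy scheme)."""
    out = bytearray()
    for i in count():
        out.extend(hashlib.sha256(secret + i.to_bytes(8, "big")).digest())
        if len(out) >= n:
            return bytes(out[:n])

def transform(data: bytes, secret: bytes) -> bytes:
    """XOR the data with the keystream; applying it twice restores the data."""
    ks = keystream(secret, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

blob = transform(b"IBAN: XX00 1234", b"user-secret")
assert transform(blob, b"user-secret") == b"IBAN: XX00 1234"  # reversible
```

Only `blob` would be sent to the provider; without the user's secret, the stored bytes are unreadable.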
Abstract: This paper reviews the model-based qualitative and
quantitative Operations Management research in the context of
Construction Supply Chain Management (CSCM). The construction
industry has traditionally been blamed for low productivity, cost and
time overruns, waste, high fragmentation and adversarial
relationships. It has also been slower than other
industries to employ the Supply Chain Management (SCM) concept
and to develop models that support decision-making and planning.
Over the last decade, however, there has been a distinct shift from a
project-based to a supply-based approach to construction management. CSCM
is emerging as a promising management tool for construction
operations and improves the performance of construction projects in
terms of cost, time and quality. Modeling the Construction Supply
Chain (CSC) offers the means to reap the benefits of SCM, make
informed decisions and gain competitive advantage. Different
modeling approaches and methodologies have been applied in the
multi-disciplinary and heterogeneous research field of CSCM. The
literature review reveals that a considerable percentage of the CSC
modeling research comprises conceptual or process models
which present general management frameworks and do not relate to
acknowledged soft Operations Research methods. We particularly
focus on the model-based quantitative research and categorize the
CSCM models depending on their scope, objectives, modeling
approach, solution methods and software used. Although over the last
few years there has clearly been an increase in research papers on
quantitative CSC models, we find that the relevant literature is
very fragmented, with limited applications of simulation,
mathematical programming and simulation-based optimization. Most
applications are project-specific or study only parts of the supply
system. Thus, some complex interdependencies within construction
are neglected and the implementation of the integrated supply chain
management is hindered. We conclude this paper by giving future
research directions and emphasizing the need to develop optimization
models for integrated CSCM. We stress that CSC modeling needs a
multi-dimensional, system-wide and long-term perspective. Finally,
prior applications of SCM to other industries have to be taken into
account in order to model CSCs, but not without translating the
generic concepts to the context of the construction industry.
Abstract: Supply chains are the backbone of trade and
commerce. Their logistics use different transport corridors on a
regular basis for operational purposes. The international supply chain
transport corridors include different infrastructure elements (e.g.
weighbridges, package handling equipment, border clearance
authorities, and so on). This paper presents the use of multi-agent
systems (MAS) to model and simulate some aspects of transportation
corridors, and in particular the area of weighbridge resource
optimization for operational profit. An underlying multi-agent model
provides a means of modeling the relationships among stakeholders
in order to enable coordination in a transport corridor environment.
Simulations of the costs of container unloading, reloading, and
waiting time for queuing trucks have been carried out using data
sets. The simulation results provide potential guidance for
making decisions about optimal service resource allocation in a trade
corridor.
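The waiting-time quantity the agents negotiate over can be illustrated with a single-server weighbridge queue. The arrival and service times below are invented, and this stand-in is far simpler than the paper's multi-agent model.

```python
# Trucks arrive at given times; each weighing takes a fixed service time.
# Total waiting time is the cost the resource-allocation decision targets.
def total_wait(arrivals, service=5.0):
    free_at, wait = 0.0, 0.0
    for t in sorted(arrivals):          # FIFO over the single weighbridge
        start = max(t, free_at)         # wait if the bridge is busy
        wait += start - t
        free_at = start + service
    return wait

print(total_wait([0, 2, 4, 20]))  # → 9.0
```

Adding a second weighbridge agent and comparing the resulting waiting cost is the kind of trade-off the simulation explores.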
Abstract: This paper focuses on a critical component of the
situational awareness (SA), the control of autonomous vertical flight
for a vectored-thrust aerial vehicle (VTAV). Within the SA strategy, we
propose a neural network motion control procedure to address the
dynamics variation and the differing performance requirements of the
flight trajectory of a VTAV. This control strategy, using a NARMA-L2
neurocontroller for the chosen VTAV model, has been verified by
simulation of take-off and forward maneuvers in the Simulink software
package and demonstrated good performance and fast stabilization of
the motors; consequently, fast SA with economy of energy can be
expected during search-and-rescue operations.
Abstract: Ontology validation is an important part of web
applications’ development, where knowledge integration and
ontological reasoning play a fundamental role. It aims to ensure the
consistency and correctness of ontological knowledge and to
guarantee that ontological reasoning is carried out in a meaningful
way. Existing approaches to ontology validation address more or less
specific validation issues, but the overall process of validating web
ontologies has not been formally established yet. As the size and the
number of web ontologies continue to grow, more web applications’
developers will rely on the existing repository of ontologies rather
than develop ontologies from scratch. If an application utilizes
multiple independently created ontologies, their consistency must be
validated and, if necessary, the ontologies adjusted to ensure proper
interoperability between them. This paper presents a validation technique intended to
test the consistency of independent ontologies utilized by a common
application.
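One concrete inconsistency such a technique must catch is a class that ends up below two classes declared disjoint once independent ontologies are merged. The following is our own minimal sketch of such a probe, with an invented animal example, not the paper's algorithm.

```python
def ancestors(cls, subclass_of):
    """All superclasses reachable from cls through subclass axioms."""
    seen, stack = set(), [cls]
    while stack:
        for parent in subclass_of.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def conflicts(subclass_of, disjoint_pairs):
    """Classes that lie below both members of a disjoint pair."""
    bad = []
    for cls in subclass_of:
        anc = ancestors(cls, subclass_of) | {cls}
        for a, b in disjoint_pairs:
            if a in anc and b in anc:
                bad.append((cls, a, b))
    return bad

# Ontology A asserts Dolphin ⊑ Mammal; ontology B asserts Dolphin ⊑ Fish;
# Mammal and Fish are declared disjoint, so the merge is inconsistent.
merged = {"Dolphin": ["Mammal", "Fish"]}
print(conflicts(merged, [("Mammal", "Fish")]))
```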
Abstract: This research aims to develop an algorithm to
generate a schedule for multiple cranes that will maximize load
throughput in an anodizing operation. The proposed algorithm utilizes
an enumerative strategy to search for constant time between
successive loads and crane covering range over baths. The computer
program developed is able to generate a near-optimal crane schedule
within reasonable times, i.e. within 10 minutes. Its results are
compared with existing solutions from an aluminum extrusion
industry. The program can be used to generate crane schedules for
mixed products, thus allowing mixed-model line balancing to
improve overall cycle times.
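The enumerative idea can be reduced to its core: try candidate values of the constant time between successive loads and keep the smallest feasible one. The numbers below are invented, and the feasibility test is drastically simplified to a single crane's total move time per load.

```python
# Search candidate inter-load times (takts) in increasing order and
# return the first one that leaves the crane free before the next load.
def smallest_takt(move_times, candidates):
    busy = sum(move_times)            # crane time needed per load
    for takt in sorted(candidates):
        if takt >= busy:
            return takt
    return None

moves = [3, 4, 2, 3]                  # minutes per bath-to-bath transfer
print(smallest_takt(moves, range(5, 30)))  # → 12
```

The full algorithm additionally enumerates crane covering ranges over the baths, but the load-throughput objective is the same: a smaller feasible takt means more loads per shift.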
Abstract: Experts, enterprises and operators expect that the
bandwidth demand will increase to rates of 100 to 1,000 Mbps
within several years. Therefore, the most important question is which
technology shall satisfy future consumer broadband demands.
Currently, the consensus is that fiber technology has the best
technical characteristics to achieve such high bandwidth rates.
However, fiber technology is still very cost-intensive and
resource-consuming. To avoid these investments, operators are
concentrating on upgrading the existing copper and hybrid fiber-coax
infrastructures. This work presents a comparison of the copper and
fiber technologies, including an overview of the current German
broadband market. Both technologies are reviewed in terms of
demand, willingness to pay and economic efficiency in connection
with their technical characteristics.
Abstract: The aim of this investigation is to elaborate near-infrared
methods for testing and recognition of chemical components
and quality in “Pannon wheat” allied (i.e. true-to-variety or variety-
identified) milling fractions, as well as to develop spectroscopic
methods that follow the milling processes and evaluate the stability of
the milling technology across different types of milling products and
sampling times, respectively. These wheat categories were
produced under industrial conditions, and samples were collected
according to sampling time and at maximum or minimum yields. The changes
of the main chemical components (such as starch, protein, lipid) and
physical properties of fractions (particle size) were analysed by
dispersive spectrophotometers using visible (VIS) and near-infrared
(NIR) regions of the electromagnetic spectrum. Close correlations
were obtained between the spectroscopic measurement data,
processed by various chemometric methods (e.g. principal
component analysis [PCA], cluster analysis [CA]), and the operating
conditions of the milling technology. The results show that NIR methods are
able to detect deviations of the yield parameters and differences
between sampling times across a wide variety of fractions. NIR
technology can be used in the sensitive monitoring of milling
technology.
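The PCA step used in the chemometric evaluation can be sketched directly from the singular value decomposition; the "spectra" below are random stand-ins for real NIR measurements.

```python
import numpy as np

def pca_scores(X, k=2):
    """Scores of the samples on the first k principal components."""
    Xc = X - X.mean(axis=0)                      # centre each wavelength
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # project onto k loadings

rng = np.random.default_rng(1)
spectra = rng.normal(size=(10, 50))              # 10 samples x 50 wavelengths
scores = pca_scores(spectra, k=2)
print(scores.shape)                              # → (10, 2)
```

Plotting such scores is how milling fractions collected at different sampling times are separated visually.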
Abstract: Venture capital is becoming an increasingly advanced
and effective source of financing for innovation projects, which carry
a high level of risk. In developed countries, it plays a key role
in transforming innovation projects into successful businesses and
in creating the prosperity of the modern economy. In Russia, there are
many of the preconditions necessary for creating an effective venture
investment system: a network of public institutes for innovation
financing operates, and there is a significant number of small and
medium-sized enterprises capable of selling products with good
market potential. However, the current system does not demonstrate the
necessary level of efficiency in practice, which can be substantially
explained by the absence of a clear plan of action for forming the
national venture model and by the lack of experience with successful
venture deals and profitable exits in the Russian economy. This paper
studies the influence of various factors on the venture industry
development, using the example of the IT sector in Russia. The choice of
this sector is based on the fact that it is the main driver of
venture capital market growth in Russia and that the necessary
data exist. The size of the second-round investment is used as the
dependent variable. To analyse the influence of the previous round,
the volume of the previous (first-) round
investments is used as a determinant. A dummy variable is also used in the
regression to examine whether the participation of an investor with a high
reputation and experience in the previous round influences the size
of the next investment round. The regression analysis of short-term
interrelations between the studied variables reveals the prevailing
influence of the volume of first-round investments on the venture
investment volume of the second round. Since the most important
determinant of the value of the second-round investment is the value
of the first-round investment, the most competitive players on
the Russian market are the start-up teams that can attract more money
at the start, while growth of the target market is not a factor of crucial
importance. This supports the view that VC in Russia is
driven by endogenous factors rather than by exogenous ones
based on global market growth.
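The regression specification described above, second-round size on first-round size plus a reputation dummy, can be sketched as an ordinary least-squares fit. The data are invented and noise-free, so the fit recovers the toy coefficients exactly.

```python
import numpy as np

first = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # round-1 size, $M (invented)
reputable = np.array([0, 1, 0, 1, 1])         # dummy: reputable investor
second = 0.5 + 2.0 * first + 1.0 * reputable  # noise-free toy outcomes

# OLS: second = b0 + b1*first + b2*reputable
X = np.column_stack([np.ones_like(first), first, reputable])
beta, *_ = np.linalg.lstsq(X, second, rcond=None)
print(np.round(beta, 3))  # → [0.5 2.  1. ]
```

With real data, the size and significance of `beta[1]` relative to `beta[2]` is exactly the comparison the paper draws.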
Abstract: In this study, we demonstrate the production of natural gas from hydrates in permeable marine sediments, with simultaneous mechanisms of methane recovery and methane-air or methane-air/carbon dioxide replacement. Simultaneous melting proceeds until the chemical potentials in the two phases become equal as natural gas hydrate depletion continues, and self-regulated methane-air replacement occurs beyond a certain point. We observed this point between the dissociation and replacement mechanisms in the natural gas hydrate reservoir, and we call this boundary the critical methane concentration. Furthermore, when carbon dioxide was added, chemical exchange of methane by air/carbon dioxide was observed in the natural gas hydrate. The suggested process should operate well for most global natural gas hydrate reservoirs, regardless of the operating conditions or geometrical constraints.
Abstract: The rapidly changing factors that affect daily life also affect the operational environment and the way military leaders fulfill their missions. With the help of technological developments, the traditional linearity of conflict and war has started to fade away. Furthermore, the mission domain has broadened to include traditional threats, hybrid threats and the new challenges of cyber and space. Considering the future operational environment, future military leaders need to adapt themselves to the new challenges of the future battlefield. But how can one decide which features of leadership are required to operate and accomplish missions on the new, complex battlefield? The main aim of this article is to answer this question. To find the right answers, leadership and its components are first defined, and then the characteristics of the future operational environment are analyzed. Finally, the leadership features required to be successful on the redefined battlefield are explained.
Abstract: To explore how the brain may recognise objects in its
general, accurate and energy-efficient manner, this paper proposes the
use of a neuromorphic hardware system formed from a Dynamic
Vision Sensor (DVS) silicon retina in concert with the SpiNNaker
real-time Spiking Neural Network (SNN) simulator. As a first step
in exploring this platform, a recognition system for dynamic
hand postures is developed, enabling the study of the methods used
in the visual pathways of the brain. Inspired by the behaviours of
the primary visual cortex, Convolutional Neural Networks (CNNs)
are modelled using both linear perceptrons and spiking Leaky
Integrate-and-Fire (LIF) neurons.
In this study’s largest configuration using these approaches, a
network of 74,210 neurons and 15,216,512 synapses is created and
operated in real-time using 290 SpiNNaker processor cores in parallel
and with 93.0% accuracy. A smaller network using only 1/10th of the
resources is also created, again operating in real-time, and it is able
to recognise the postures with an accuracy of around 86.4%, only
6.6% lower than the much larger system. The recognition rate of the
smaller network developed on this neuromorphic system is sufficient
for a successful hand posture recognition system, and demonstrates
a much improved cost to performance trade-off in its approach.
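The spiking building block of these networks, the Leaky Integrate-and-Fire neuron, can be sketched in a few lines; the time constant, threshold and input current below are invented, not SpiNNaker parameters.

```python
# Euler-integrated LIF neuron: the membrane potential leaks toward rest,
# integrates the input current, and resets when it crosses threshold.
def lif_spikes(current, dt=1.0, tau=20.0, v_rest=0.0, v_th=1.0):
    v, spikes = v_rest, []
    for t, i_in in enumerate(current):
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_th:
            spikes.append(t)          # record spike time step
            v = v_rest                # reset after spiking
    return spikes

steady = [5.0] * 100                  # constant suprathreshold input
print(lif_spikes(steady)[:3])         # → [4, 9, 14]
```

A constant input thus yields a regular spike train, and it is the rates and timings of such trains that carry information through the convolutional layers.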
Abstract: Hybrid electric vehicles can reduce pollution and
improve fuel economy. Power-split hybrid electric vehicles (HEVs)
provide two power paths between the internal combustion engine
(ICE) and energy storage system (ESS) through the gears of an
electrically variable transmission (EVT). The EVT allows the ICE to
operate independently of vehicle speed at all times. Therefore, the ICE can
operate in the efficient region of its characteristic brake specific fuel
consumption (BSFC) map. The two-mode powertrain can operate in
input-split or compound-split EVT modes and in four different fixed
gear configurations. Power-split architecture is advantageous because
it combines conventional series and parallel power paths. This
research focuses on input-split and compound-split modes in the
two-mode power-split powertrain. Fuzzy Logic Control (FLC) for an
internal combustion engine (ICE) and PI control for electric machines
(EMs) are derived for the urban driving cycle simulation. These
control algorithms reduce vehicle fuel consumption and improve ICE
efficiency while maintaining the state of charge (SOC) of the energy
storage system in an efficient range.
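The PI part of the control scheme, maintaining the SOC of the energy storage system, can be illustrated on a toy first-order battery model; all gains and dynamics below are invented and are not the paper's calibrated controller.

```python
# PI law on the SOC error driving a toy battery model toward its set-point.
def simulate(soc0=0.5, target=0.6, kp=2.0, ki=0.5, dt=0.1, steps=500):
    soc, integ = soc0, 0.0
    for _ in range(steps):
        err = target - soc
        integ += err * dt
        u = kp * err + ki * integ     # charge/discharge command
        soc += dt * 0.1 * u           # toy first-order SOC dynamics
    return soc

print(round(simulate(), 3))
```

In the full powertrain the command `u` would become a torque request to the electric machines, arbitrated against the fuzzy-logic ICE controller.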
Abstract: Exploration and exploitation capabilities are both
important within Operations as means for improvement when
managed separately, and for establishing dynamic improvement
capabilities when combined in balance. However, it is unclear what
exploration and exploitation capabilities imply in improvement and
development work within an Operations context. So, in order to
better understand how to develop exploration and exploitation
capabilities within Operations, the main characteristics of these
constructs need to be identified and further understood. Thus, the
objective of this research is to increase understanding of
exploitation and exploration characteristics, to concretize what they
translate to within the context of improvement and development
work in an Operations unit, and to identify practical challenges. A
literature review and a case study are presented. In the literature
review, different interpretations of exploration and exploitation are
portrayed, key characteristics have been identified, and a deepened
understanding of exploration and exploitation characteristics is
described. The case in the study is an Operations unit, and the aim is
to explore to what extent and in what ways exploration and
exploitation activities are part of the improvement structures and
processes. The contribution includes an identification of key
characteristics of exploitation and exploration, as well as an
interpretation of the constructs. Further, some practical challenges are
identified. For instance, exploration activities tend to be given low
priority, both in daily work and in the manufacturing strategy. Also,
the overall understanding of the concepts of exploitation and
exploration (or any similar aspect of dynamic improvement
capabilities) is very low.
Abstract: The paper develops a Non-Linear Model Predictive
Control (NMPC) of water quality in Drinking Water Distribution
Systems (DWDS) based on the advanced non-linear quality dynamics
model including disinfection by-products (DBPs). Special attention
is paid to the analysis of the impact of the flow trajectories prescribed
by an upper control level of the recently developed two-time scale
architecture of an integrated quality and quantity control in DWDS.
The new quality controller is to operate within this architecture in the
fast time scale as the lower level quality controller. The controller
performance is validated by a comprehensive simulation study based
on an example case study DWDS.
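The receding-horizon principle behind the quality controller can be illustrated on a toy one-state disinfectant model: at each step, the dose minimising a one-step tracking cost is applied and the horizon rolls forward. This is our own scalar sketch, far simpler than the paper's non-linear DWDS model.

```python
# Toy quality dynamics: residual decays each step, dose u is added.
def step(c, u, decay=0.1):
    return (1 - decay) * c + u

# One-step receding-horizon controller over a small dose grid.
def control(c, setpoint=1.0, grid=(0.0, 0.05, 0.1, 0.15, 0.2)):
    return min(grid, key=lambda u: (step(c, u) - setpoint) ** 2)

c = 0.5                               # initial residual, below set-point
for _ in range(50):
    c = step(c, control(c))           # apply best dose, roll horizon
print(round(c, 2))  # → 1.0
```

A genuine NMPC scheme optimises over a multi-step horizon with the non-linear network model and constraints, but the closed-loop structure is the same.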
Abstract: In this study, the pedestrian simulation platform
VISWALK is integrated with a program implementing ant algorithms
to construct a schedule-planning model for renovation engineering.
The simulation platform models users walking through the
construction site; after computing the delays that the construction
imposes on the users, the ant algorithm searches for the schedule
plan with the minimum delay time, adds the loss of business
computed per unit of deactivated floor area, and finally weighs the
two different positions of the owners and the users against each
other to pick out the best schedule plan. To assess and validate its
effectiveness, this study applied the model to a floor renovation
engineering case in a shopping mall. The case verifies that the
project schedule plan proposed by the model can effectively reduce
both the delay time and the mall's loss of business caused by
disrupted user walking, keeping the impact of the renovation
engineering operations on the facilities in the building to a
minimum.
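The objective the ant algorithm optimises, delay plus lost business, can be made concrete on a toy instance small enough to enumerate outright; the durations and loss rates below are invented, and exhaustive search stands in for the ant algorithm here.

```python
from itertools import permutations

# Cost of renovating work areas in a given order: each area j stays
# deactivated until its work completes, losing business at loss_rate[j].
def cost(order, dur, loss_rate):
    t, total = 0.0, 0.0
    for j in order:
        t += dur[j]                   # completion time of area j
        total += loss_rate[j] * t     # business lost while area j is idle
    return total

dur = [3, 1, 2]                       # days of work per area (invented)
loss_rate = [1.0, 4.0, 2.0]           # lost business per idle day (invented)
best = min(permutations(range(3)), key=lambda o: cost(o, dur, loss_rate))
print(best, cost(best, dur, loss_rate))  # → (1, 2, 0) 16.0
```

For realistic numbers of work areas the permutation space explodes, which is why a metaheuristic such as the ant algorithm is used to search it.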
Abstract: A novel design technique employing a CMOS Current
Feedback Operational Amplifier (CFOA) is presented. The very low
power consumption of the pseudo-OTA is exploited to decrease the
total power consumption of the proposed CFOA. The design
approach applies a pseudo-OTA as an input stage cascaded with a
buffer stage. Moreover, the DC input offset voltage and harmonic
distortion (HD) of the proposed CFOA are very low compared
with the conventional CMOS CFOA, owing to the symmetrical input
stage. PSpice simulation results are obtained using 0.18 µm MIETEC
CMOS process parameters with a supply voltage of ±1.2 V and a
50 µA biasing current. The PSpice simulations show excellent
improvement of the proposed CFOA over the existing CMOS CFOA.
Example performance parameters are a DC gain of 62 dB, an
open-loop gain-bandwidth product of 108 MHz, a slew rate (SR+) of
+71.2 V/µs, a THD of -63 dB and a DC power consumption (PC) of
2 mW.
Abstract: This paper discusses the forensic investigation of a
fatality-involved catastrophic structure collapse and the special
challenges faced when tasked with directing such an effort. While
this paper discusses the investigation’s findings and the outcome of
the event, its primary focus is on the challenges faced
directing a forensic investigation that requires coordinating with
governmental oversight while also having to accommodate multiple
parties’ investigative teams. In particular, the challenges discussed
within this paper include maintaining on-site safety and operations
while accommodating outside investigators’ interests. In addition, this
paper discusses unique challenges that one may face, such as what to
do about unethical conduct by an interested party’s investigative team,
“off the record” sharing of information, and clandestinely transmitted
evidence.