Abstract: In the global precision machinery market, it is a critical time for manufacturers to upgrade technology and increase added value through skill development and management strategies that satisfy customer needs. In recent years, precision machinery manufacturers on the supply side in every country have faced price-reduction pressure from the demand side, pushing high-end manufacturers to adopt a low-cost, high-quality strategy to recover market share. Given this trend in the global market, manufacturers must combine price-reduction strategies with technology upgrades of low-end machinery for differentiation in order to consolidate the market. Six key success factors (KSFs), namely customer perceived value, customer satisfaction, customer service, product design, product effectiveness and machine structure quality, are used as causal conditions to explore their impact on the competitive advantage of the enterprise, such as overall profitability and product pricing power. This research uses the key success paths (KSPs) approach and fs/QCA software to explore the various combinations of causal relationships, so as to fully understand the performance levels of the KSFs and the business objectives required to achieve competitive advantage. In this study, such combinations of causal conditions are called key success paths (KSPs); they guide the enterprise to achieve specific business outcomes. The findings of this study indicate thirteen KSPs that achieve overall profitability, sixteen KSPs that achieve product pricing power, and seventeen KSPs that achieve both overall profitability and pricing power. The KSPs provide directions for resource integration and allocation, improving the utilization efficiency of limited resources to realize the enterprise's long-term vision.
Abstract: Two multisensor system architectures for navigation
and guidance of small Unmanned Aircraft (UA) are presented and
compared. The main objective of our research is to design a compact,
light and relatively inexpensive system capable of providing the
required navigation performance in all phases of flight of small UA,
with a special focus on precision approach and landing, where Vision
Based Navigation (VBN) techniques can be fully exploited in a
multisensor integrated architecture. Various existing techniques for
VBN are compared and the Appearance-Based Navigation (ABN)
approach is selected for implementation. Feature extraction and
optical flow techniques are employed to estimate flight parameters
such as roll angle, pitch angle, deviation from the runway centreline
and body rates. Additionally, we address the possible synergies of
VBN, Global Navigation Satellite System (GNSS) and MEMS-IMU
(Micro-Electromechanical System Inertial Measurement Unit)
sensors, and the use of Aircraft Dynamics Model (ADM) to provide
additional information suitable to compensate for the shortcomings of
VBN and MEMS-IMU sensors in high-dynamics attitude
determination tasks. An Extended Kalman Filter (EKF) is developed
to fuse the information provided by the different sensors and to
provide estimates of position, velocity and attitude of the UA
platform in real-time. The key mathematical models describing the
two architectures, i.e., the VBN-IMU-GNSS (VIG) system and the
VIG-ADM (VIGA) system, are introduced. The first architecture uses VBN
and GNSS to augment the MEMS-IMU. The second mode also
includes the ADM to provide augmentation of the attitude channel.
Simulation of these two modes is carried out and the performances of
the two schemes are compared in a small UA integration scheme (i.e.,
AEROSONDE UA platform) exploring a representative cross-section
of this UA operational flight envelope, including high dynamics
manoeuvres and CAT-I to CAT-III precision approach tasks.
Simulation of the first system architecture (i.e., VIG system) shows
that the integrated system can reach position, velocity and attitude
accuracies compatible with the Required Navigation Performance
(RNP) requirements. Simulation of the VIGA system also shows
promising results since the achieved attitude accuracy is higher using
the VBN-IMU-ADM than using VBN-IMU only. A comparison of
the VIG and VIGA systems is also performed, showing that the
position and attitude accuracy of the proposed VIG and VIGA
systems are both compatible with the RNP specified in the various
UA flight phases, including precision approach down to CAT-II.
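The abstract does not give the filter equations; as a hedged illustration of the fusion principle only (not the authors' actual EKF), the following minimal one-dimensional Kalman filter blends a gyro-integrated attitude prediction with a noisy vision-based attitude measurement. All names, noise values and the simulated signal are illustrative assumptions.

```python
import math, random

def kalman_fuse(gyro_rates, vision_meas, dt, q=1e-4, r=4e-2):
    """Scalar Kalman filter: predict attitude by integrating the gyro rate,
    correct with a vision-based attitude measurement (all units rad)."""
    x, p = 0.0, 1.0            # state estimate and its variance
    est = []
    for w, z in zip(gyro_rates, vision_meas):
        # predict: integrate the gyro rate, inflate variance by process noise
        x += w * dt
        p += q
        # update: blend in the vision measurement with Kalman gain k
        k = p / (p + r)
        x += k * (z - x)
        p *= (1.0 - k)
        est.append(x)
    return est

random.seed(1)
dt = 0.01
true = [0.5 * math.sin(0.2 * i * dt) for i in range(1000)]
rates = [0.5 * 0.2 * math.cos(0.2 * i * dt) + random.gauss(0, 0.01)
         for i in range(1000)]
vision = [t + random.gauss(0, 0.2) for t in true]
est = kalman_fuse(rates, vision, dt)
err_fused = sum(abs(e - t) for e, t in zip(est, true)) / len(true)
err_vision = sum(abs(z - t) for z, t in zip(vision, true)) / len(true)
print(err_fused < err_vision)  # fused estimate beats raw vision alone
```

The same predict/update structure, generalized to a full state vector with linearized models, underlies the EKF described in the abstract.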
Abstract: This work aims to introduce an efficient, standardized measurement system analysis for the automotive industry. The study started with a literature review on the management and analysis of measurement systems, from which an approach to measurement system management was constructed. The approach was validated by collecting data from the current measurement system using the instruments of interest, namely a vernier caliper and a micrometer, and analyzing the accuracy and precision of their measurements. Finally, the measurement system was improved and evaluated. The study showed that the vernier caliper did not meet its measuring characteristics in terms of linearity, while all instruments lacked the required measurement precision. Consequently, the causes of measurement variation in these instruments were identified. After the improvement, their measuring performance was found to be acceptable to the required standard. Finally, a standardized approach for analyzing automotive measurement systems was concluded.
Abstract: The Global Positioning System (GPS), a satellite-based technology, has been utilized extensively in the last few years in a wide range of Geomatics and Geographic Information Systems (GIS) applications. One of the main challenges in dealing with GPS-based heights is converting them into Mean Sea Level (MSL) heights, which are used in surveying and mapping.
In this research’s work, differences in heights of 50 points, in northern part of Libya has been carried out by using both ordinary leveling (in which Geoid is the reference datum) and GPS techniques (in which Ellipsoid is the reference datum). In addition, this study utilized the EGM2008 model to obtain the undulation values between the ellipsoidal and orthometric heights. From these values of ellipsoidal heights can be obtained from GPS observations to compute the orthomteric heights. This research presents a suitable alternative, from an economical point of view, to substitute the expensive traditional leveling technique, particularly, for topographic mapping.
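The conversion the study relies on is the standard relation H ≈ h − N between the orthometric height H, the GPS-derived ellipsoidal height h, and the geoid undulation N (here taken from EGM2008). The numbers below are purely illustrative, not the study's Libyan data.

```python
def orthometric_height(h_ellipsoidal, geoid_undulation):
    """H = h - N: orthometric (MSL) height from a GPS ellipsoidal height
    and the geoid undulation (e.g. interpolated from EGM2008)."""
    return h_ellipsoidal - geoid_undulation

# Illustrative values only (not the study's data):
h = 145.30   # ellipsoidal height from GPS, metres
N = 18.75    # EGM2008 undulation at the point, metres
H = orthometric_height(h, N)
print(round(H, 2))  # 126.55
```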
Abstract: This paper presents a MOSFET-based analog-to-digital converter which is simple in design, has high resolution, and has a conversion rate better than a dual-slope ADC. It has no DAC to limit the performance, introduces no conversion error, can operate over a wide range of inputs and never becomes unstable. One industrial application where the proposed high-resolution MOSFET ADC can be used is the positioning of control valves in a multi-channel data acquisition and control system (DACS) using stepper motors as actuators of the control valves. It is observed that in a DACS having ten control valves, 0.02% positional accuracy of the control valves can be achieved with a data update period of 250 ms and stepper motors with a maximum pulse rate of 20 kpulses per second and a minimum pulse width of 2.5 μs. The accuracy reported so far by other authors is 0.2%, with an update period of 255 ms and an 8-bit DAC. The accuracy of the proposed configuration is limited by the available precision of the stepper motor and not by the MOSFET-based ADC.
Abstract: This paper discusses the effects of using progressive Type-I right censoring on the design of simple step-stress accelerated life testing using a Bayesian approach for Weibull life products under the assumption of the cumulative exposure model. The optimization criterion used in this paper is to minimize the expected pre-posterior variance of the Pth percentile time to failure. The model variables are the stress changing time and the stress value for the first step. A comparison between conventional and progressive Type-I right censoring is provided. The results show that progressive Type-I right censoring reduces the cost of testing at the expense of test precision when the sample size is small. Moreover, the results show that using strong priors or a large sample size reduces the sensitivity of the test precision to the censoring proportion. Hence, progressive Type-I right censoring is recommended in these cases, as it reduces the cost of the test without greatly affecting its precision. Moreover, the results show that the choice of direct or indirect priors affects the precision of the test.
Abstract: This paper studies a CSS-based indoor localization system which is easy to implement, inexpensive to build and covers a larger area than other systems. However, this system is affected by reflected distance data, a localization problem caused by the multi-path effect. Errors caused by multi-path are difficult to correct because the indoor environment cannot be fully described. In this paper, in order to solve the multi-path problem, we supplement the localization system with a pattern matching method based on an extended database, which improves the precision of the estimates. The method is verified by experiments in a gymnasium. The database was constructed at 1 m intervals, and 16 sample data were collected from random positions inside the region of the database points. As a result, this paper shows higher accuracy than the existing method, as demonstrated in graphs and tables.
Abstract: A wide variety of observational methods have been developed to evaluate the ergonomic workloads in manufacturing. However, the precision and accuracy of these methods remain a subject of debate. The aims of this study were to develop biomechanical methods to evaluate ergonomic workloads and to compare them with observational methods.
Two observational methods, i.e. the SCANIA Ergonomic Standard (SES) and Rapid Upper Limb Assessment (RULA), were used to assess ergonomic workloads at two simulated workstations. These included four tasks: tightening and loosening, attachment of tubes, strapping, and other actions. Sensors (inclinometers, accelerometers, and goniometers) were also used to collect biomechanical data.
Our findings showed that in the assessment of some risk factors, both RULA and SES were in agreement with the results of the biomechanical methods. However, there was disagreement on neck and wrist postures. In conclusion, the biomechanical approach was more precise than the observational methods, but some risk factors evaluated with the observational methods were not measurable with the biomechanical techniques developed.
Abstract: This paper is based on bridgeless single-phase AC-DC Power Factor Correction (PFC) converters with a Fuzzy Logic Controller. High-frequency isolated Cuk converters are used as modular DC-DC converters operating in Discontinuous Conduction Mode (DCM) for power factor correction. The aim of this paper is to simplify the program complexity of the controller by reducing the number of fuzzy sets of the membership functions (MFs), to improve efficiency and to eliminate power quality problems. The output of the fuzzy controller is compared with a high-frequency triangular wave to generate the PWM gating signals of the Cuk converter. The proposed topologies are designed to work in DCM to achieve a unity power factor and low total harmonic distortion of the input current. The Fuzzy Logic Controller offers additional advantages such as accurate results, tolerance of uncertainty and imprecision, and automatic control circuitry. Performance comparisons between the proposed and conventional controllers and circuits are performed based on circuit simulations.
Abstract: We propose the numerical method defined by
xn+1 = xn − λ f(xn − μh(xn)) / f'(xn), n ∈ N,
and determine the control parameters λ and μ so that the scheme converges cubically. In addition, we derive the asymptotic error constant. Applying the proposed scheme to various test functions, the numerical results show good agreement with the theory analyzed in this paper and are verified using Mathematica with its high-precision computability.
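A minimal sketch of the iteration follows, with illustrative choices of f, h, λ and μ; the cubically convergent parameter values are derived in the paper itself and are not reproduced here. With λ = 1 and μ = 0 the scheme reduces to Newton's method, which is used below as a sanity check.

```python
def iterate(f, fprime, h, x0, lam=1.0, mu=0.0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - lam * f(x_n - mu*h(x_n)) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = lam * f(x - mu * h(x)) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Sanity check: lam=1, mu=0 gives Newton's method for f(x) = x^2 - 2.
f = lambda x: x * x - 2.0
fp = lambda x: 2.0 * x
root = iterate(f, fp, h=lambda x: f(x) / fp(x), x0=1.0)
print(abs(root - 2.0 ** 0.5) < 1e-10)  # converges to sqrt(2)
```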
Abstract: Various studies have shown that about 90% of single line-to-ground faults occurring on high-voltage transmission lines are transient in nature. This type of fault is cleared by a temporary outage (by the single-phase auto-reclosure). The interval between opening and reclosing of the faulted phase circuit breakers is called the "dead time" and is of the order of several hundred milliseconds. To adjust traditional single-phase auto-reclosures, which are usually not intelligent, the dead time must be calculated precisely offline. If the dead time used in adjusting the single-phase auto-reclosure is less than the real dead time, reclosing the circuit breakers seriously threatens the power system. Therefore, this paper presents a novel approach for precise calculation of the dead time in power transmission lines based on network equivalencing in the time domain. This approach has considerably higher precision than the traditional method based on the Thevenin equivalent circuit. To compare the proposed approach with the traditional method, a comprehensive simulation with EMTP-ATP is performed on an extensive power network.
Abstract: This investigation presents the formulation of kerf (width of slit) and the optimal control parameter settings of wire electrical discharge machining (WEDM) that yield the minimum possible kerf while machining Al7075/SiCp MMCs. WEDM has proven efficient and effective for cutting hard ceramic-reinforced MMCs within a permissible budget. Among the distinct performance measures of the WEDM process, kerf is an important characteristic that determines the dimensional accuracy of the machined component when producing high-precision parts. The lack of available machinability information for such advanced MMCs forces more experimentation in the manufacturing industries. Therefore, extensive experimental investigations are essential to provide a database of the effects of the various control parameters on kerf while machining such advanced MMCs by WEDM. The literature reveals the significance of some electrical parameters that are prominent for kerf when machining distinct conventional materials. In this work, however, the significance of the reinforced particulate size and volume fraction on kerf is highlighted while machining MMCs, along with the machining parameters of pulse-on time, pulse-off time and wire tension. Usually, the dimensional tolerances of machined components are decided at the design stage, and a machinist aims to produce the required tolerances by setting appropriate machining control variables. However, it is very difficult to determine the optimal machining settings for such advanced materials on the shop floor. Therefore, in view of the precision of the cut, kerf (cutting width) is considered the performance measure for the model. It was found from the literature that machining conditions with higher fractions of large SiCp result in less kerf, whereas high values of pulse-on time result in a high kerf.
A response surface model is used to predict the relative significance of the various control variables on kerf. Subsequently, a powerful artificial intelligence technique, the genetic algorithm (GA), is used to determine the best combination of control variable settings. A confirmation test was then conducted at the optimal parameter settings, and good agreement was found between the GA-predicted kerf and the measured kerf. This clearly reveals the effectiveness and accuracy of the developed model and program in analyzing the kerf and determining its optimal process parameters. The results obtained in this work show that the optimized parameters are capable of machining Al7075/SiCp MMCs more efficiently and with better dimensional accuracy.
Abstract: Reformulating the user query is a technique that aims to improve the performance of an Information Retrieval System (IRS) in terms of precision and recall. This paper evaluates the technique of query reformulation guided by an external resource for Arabic texts. To do this, various precision and recall measurements were conducted on two corpora with different external resources, namely Arabic WordNet (AWN) and the Arabic Dictionary (thesaurus) of Meaning (ADM). Examination of the obtained results allows us to measure the real contribution of this reformulation technique to improving IRS performance.
Abstract: Nowadays, mathematical/statistical applications are developed with greater complexity and accuracy. However, this precision and complexity mean that applications need more computational power in order to execute faster. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications, as they allow the inclusion of more parallelism inside the node. However, taking advantage of this parallelism is not an easy task, because we have to deal with problems such as core communication, data locality, memory sizes (cache and RAM), synchronization, data dependencies in the model, etc. These issues become more important when we wish to improve the application's performance and scalability. Hence, this paper describes an optimization method developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool of the European Commission, based on analyzing the application's weaknesses in order to exploit the advantages of the multicore. All these improvements are done in an automatic and transparent manner with the aim of improving the performance metrics of our tool. Finally, experimental evaluations show the effectiveness of our new optimized version: the execution time has been reduced by around 96% for the best case tested, between the original serial version and the automatic parallel version.
Abstract: A new basis function neural network algorithm is proposed for numerical integration. The main idea is to construct a neural network model based on spline basis functions, which approximates the integrand by training the neural network weights. The convergence theorem of the neural network algorithm, the theorem for numerical integration and one corollary are presented and proved. Numerical examples, compared with other methods, show that the algorithm is effective and has characteristics such as high precision, without requiring the integrand to be known in closed form. Thus, the algorithm presented in this paper can be widely applied in many engineering fields.
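The paper's spline-network details are not given in the abstract; as a hedged sketch of the underlying idea only, the code below approximates an integrand from sample values alone by expanding it in piecewise-linear (first-order spline, "hat") basis functions, whose weights are here simply the samples rather than trained, and integrates the expansion exactly; on a uniform grid this reproduces the trapezoidal rule.

```python
def hat_basis_integral(xs, ys):
    """Approximate the integral of f from samples (xs, ys) by expanding f
    in piecewise-linear hat basis functions phi_i (weight w_i = y_i) and
    summing w_i * integral(phi_i). Equivalent to the trapezoidal rule."""
    total = 0.0
    for i, y in enumerate(ys):
        left = xs[i] - xs[i - 1] if i > 0 else 0.0
        right = xs[i + 1] - xs[i] if i < len(xs) - 1 else 0.0
        total += y * 0.5 * (left + right)  # exact integral of hat phi_i
    return total

# Samples of f(x) = x^2 on [0, 1]; the exact integral is 1/3.
n = 1000
xs = [i / n for i in range(n + 1)]
ys = [x * x for x in xs]
approx = hat_basis_integral(xs, ys)
print(abs(approx - 1.0 / 3.0) < 1e-5)  # close to 1/3
```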
Abstract: In this paper, we present a comparative study of genetic algorithms and Hessian-based methods for the optimal dispatch of active power in an electric power network. The objective function, which is the performance index of electrical energy production, is minimized subject to equality and inequality constraints, first by the Hessian-based methods and then by the genetic algorithms. The results found by applying the GA to the minimization of electric power production costs are very encouraging. Genetic algorithms seem to be an effective technique for solving a great number of problems and are in constant evolution. Nevertheless, it should be noted that the traditional binary representation used in genetic algorithms creates optimization problems when managing large networks with high numerical precision.
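The precision limitation mentioned above follows directly from binary encoding: with b bits on an interval [lo, hi], the finest representable step is (hi − lo)/(2^b − 1). The toy GA below (all parameters and the cost function are illustrative, not the paper's dispatch problem) minimizes a quadratic cost with a binary chromosome, making the resolution/chromosome-length trade-off explicit.

```python
import random

def decode(bits, lo, hi):
    """Map a binary chromosome to a real value; the resolution is
    (hi - lo) / (2**len(bits) - 1)."""
    v = int("".join(map(str, bits)), 2)
    return lo + v * (hi - lo) / (2 ** len(bits) - 1)

def ga_minimize(cost, lo, hi, bits=16, pop=40, gens=80, seed=0):
    rng = random.Random(seed)
    popn = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda c: cost(decode(c, lo, hi)))
        elite = popn[: pop // 2]                  # truncation selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, bits)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                # bit-flip mutation
                j = rng.randrange(bits)
                child[j] ^= 1
            children.append(child)
        popn = elite + children
    best = min(popn, key=lambda c: cost(decode(c, lo, hi)))
    return decode(best, lo, hi)

# Toy "production cost" with its minimum at x = 3.7 on [0, 10].
best = ga_minimize(lambda x: (x - 3.7) ** 2, 0.0, 10.0)
res = (10.0 - 0.0) / (2 ** 16 - 1)
print(round(res, 6))  # 0.000153  (finest step a 16-bit chromosome can take)
```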
Abstract: In this paper, the detection and tracking of faces, mouths, hands and medication bottles in the context of camera-based medication intake monitoring is presented. This is aimed at recognizing medication intake by the elderly in their home setting to avoid inappropriate use. Background subtraction is used to isolate moving objects; then skin and bottle segmentation are done in the RGB normalized color space. We use a minimum displacement distance criterion to track skin color regions and the R/G ratio to detect the mouth. The color-labeled medication bottles are simply tracked based on the color space distance to their mean color vector. For the recognition of medication intake, we propose a three-level hierarchical approach that uses activity patterns to recognize the normal medication intake activity. The proposed method was tested with three persons, with different medication intake scenarios, and gave an overall precision of over 98%.
Abstract: Creative design requires new approaches to assessment
in vocational and technological education. To date, there has been little
discussion on instruments used to evaluate dies produced by students
in vocational and technological education. Developing a generic
instrument has been very difficult due to the diversity of creative
domains, the specificity of content, and the subjectivity involved in
judgment. This paper presents an instrument for measuring the
creativity in the design of products by expanding the Consensual
Assessment Technique (CAT). The content-based scale was evaluated
for content validity by 5 experts. The scale comprises 5 criteria:
originality; practicability; precision; aesthetics; and exchangeability.
Nine experts were invited to evaluate the dies produced by 38 college
students who enrolled in a Product Design and Development course.
To further explore the degree of rater agreement, inter-rater reliability
was calculated for each dimension using Kendall's coefficient of
concordance test. The inter-judge reliability scores achieved
significance, with coefficients ranging from 0.53 to 0.71.
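The concordance statistic used above can be computed as follows; this is a minimal sketch without the tie correction, and the rating matrix is invented for illustration, not the study's expert data.

```python
def kendalls_w(ratings):
    """Kendall's coefficient of concordance W for a ratings matrix:
    rows = raters, columns = items. Each rater's scores are converted
    to ranks (no tie correction)."""
    m, n = len(ratings), len(ratings[0])
    ranked = []
    for row in ratings:
        order = sorted(range(n), key=lambda j: row[j])
        ranks = [0] * n
        for r, j in enumerate(order, start=1):
            ranks[j] = r                 # rank 1 = lowest score
        ranked.append(ranks)
    # column rank sums and their squared deviation from the mean sum
    col_sums = [sum(ranked[i][j] for i in range(m)) for j in range(n)]
    mean = m * (n + 1) / 2.0
    s = sum((c - mean) ** 2 for c in col_sums)
    return 12.0 * s / (m * m * (n ** 3 - n))

# Perfect agreement between 3 raters over 4 items gives W = 1.
agree = [[1, 2, 3, 4], [10, 20, 30, 40], [0.1, 0.2, 0.3, 0.4]]
print(kendalls_w(agree))  # 1.0
```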
Abstract: Software reusability is a primary attribute of software quality. There are metrics for identifying the quality of reusable components, but the function that uses these metrics to determine the reusability of software components is still not clear. If identified in the design phase, or even in the coding phase, these metrics can help reduce rework by improving the quality of reuse of the component, and hence improve productivity through a probabilistic increase in the reuse level. In this paper, we have devised a framework of metrics that uses McCabe's cyclomatic complexity for complexity measurement, a regularity metric, Halstead's software science indicator for volume, a reuse frequency metric and a coupling metric of the software component as input attributes, and calculates the reusability of the software component. A comparative analysis of fuzzy, neuro-fuzzy and fuzzy-GA approaches for evaluating the reusability of software components is performed, and the fuzzy-GA results outperform the other approaches used. The developed reusability model has produced high-precision results, as expected by the human experts.
Abstract: We present a hardware-oriented method for real-time measurement of an object's position in video. The targeted application area is light spots used as references for robotic navigation. Different algorithms for dynamic thresholding are explored in combination with component labeling and Center Of Gravity (COG) computation for the highest possible precision versus Signal-to-Noise Ratio (SNR). The method was developed with low hardware cost in focus, requiring only one convolution operation for the preprocessing of data.
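The thresholding, component-labeling and COG pipeline named in the abstract can be sketched in software as follows; the hardware design itself is not reproduced, and the test image and threshold are illustrative.

```python
def spot_cog(image, threshold):
    """Threshold a grayscale image, label the 4-connected component of
    each bright pixel via flood fill, and return the centre of gravity
    (intensity-weighted centroid) of each labeled spot as (x, y)."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    spots = []
    for y0 in range(h):
        for x0 in range(w):
            if image[y0][x0] >= threshold and not seen[y0][x0]:
                stack, pixels = [(x0, y0)], []
                seen[y0][x0] = True
                while stack:                      # flood-fill one component
                    x, y = stack.pop()
                    pixels.append((x, y, image[y][x]))
                    for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                        if 0 <= nx < w and 0 <= ny < h and \
                           not seen[ny][nx] and image[ny][nx] >= threshold:
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                total = sum(v for _, _, v in pixels)
                cx = sum(x * v for x, _, v in pixels) / total
                cy = sum(y * v for _, y, v in pixels) / total
                spots.append((cx, cy))
    return spots

# A small light spot whose brightest pixel is at (2, 1):
img = [[0, 0, 0, 0, 0],
       [0, 10, 20, 10, 0],
       [0, 0, 10, 0, 0],
       [0, 0, 0, 0, 0]]
print(spot_cog(img, threshold=5))  # [(2.0, 1.2)]
```

The intensity weighting is what gives COG its sub-pixel precision relative to taking the brightest pixel alone.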