Application of l1-Norm Minimization Technique to Image Retrieval

Image retrieval is a topic of high current scientific interest. The important steps in an image retrieval system are the extraction of discriminative features and a feasible similarity metric for retrieving the database images that are similar in content to the query image. Gabor filtering is a widely adopted technique for feature extraction from texture images. The recently proposed sparsity-promoting l1-norm minimization technique finds the sparsest solution of an under-determined system of linear equations. In the present paper, the l1-norm minimization technique is used as a similarity metric for image retrieval. It is demonstrated through simulation results that the l1-norm minimization technique provides a promising alternative to existing similarity metrics. In particular, the cases where the l1-norm minimization technique works better than the Euclidean distance metric are singled out.
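
As a minimal sketch of the sparsity-promoting idea (not the authors' implementation), the problem min ||x||_1 subject to Ax = y can be cast as a linear program. In a retrieval setting one might stack database feature vectors as columns of A and inspect the sparse coefficient vector obtained for a query feature y; the matrix sizes and "Gabor feature" interpretation below are illustrative assumptions.

    import numpy as np
    from scipy.optimize import linprog

    def l1_min(A, y):
        """Solve min ||x||_1 subject to A x = y via the standard LP split x = u - v, u, v >= 0."""
        m, n = A.shape
        c = np.ones(2 * n)                      # objective: sum(u) + sum(v) = ||x||_1
        A_eq = np.hstack([A, -A])               # A u - A v = y
        bounds = [(0, None)] * (2 * n)
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
        uv = res.x
        return uv[:n] - uv[n:]                  # recover x = u - v

    # Toy example: 'A' holds (hypothetical) Gabor feature vectors of database images as columns,
    # 'y' is the query image's feature vector; large |x_i| suggests database image i is similar.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 50))
    x_true = np.zeros(50); x_true[[3, 17]] = [1.0, -0.5]
    y = A @ x_true
    x = l1_min(A, y)
    print(np.argsort(-np.abs(x))[:2])           # indices of the most relevant database images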

Artificial Visual Percepts for Image Understanding

Visual inputs are one of the key sources from which humans perceive the environment and 'understand' what is happening. Artificial systems perceive visual inputs as digital images, which need to be processed and analysed. Within the human brain, the processing of visual inputs and the subsequent development of perception is one of its major functions. In this paper we present part of our research project, which aims at the development of an artificial model of visual perception (or 'understanding') based on the human perceptive and cognitive systems. We propose a new model for perception from visual inputs and a way of understanding or interpreting images using the model. We demonstrate the implementation and use of the model with a real image data set.

Recovering the Clipped OFDM Figure Based on the Conic Function

In Orthogonal Frequency Division Multiplexing (OFDM) systems, the peak-to-average power ratio (PAR) is very high. Clipping the signal is a useful scheme to reduce the PAR. Clipping the OFDM signal, however, increases the overall noise level by introducing clipping noise. It is therefore necessary to recover the figure of the original signal at the receiver in order to reduce the clipping noise. Considering the continuity of the signal and the figure of the peak, we obtain a conic function curve to replace the clipped signal modulus within the clipping time. Simulation results show that the proposed scheme can reduce the system's BER (bit-error rate) by a factor of ten when the signal-to-interference-and-noise ratio (SINR) equals 12 dB. The BER performance of the proposed scheme is also superior to that of Kim's scheme.
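
The general idea can be sketched as follows, under stated assumptions rather than as the paper's exact scheme: the receiver locates each run of clipped samples, then replaces the flat magnitude with a parabola (a conic curve) fitted to the boundary samples, keeping the original phase. The assumed peak height (10% above the clipping threshold) and signal sizes are illustrative only.

    import numpy as np

    def clip(signal, threshold):
        """Magnitude-clip a complex baseband OFDM signal at the given threshold."""
        mag = np.abs(signal)
        scale = np.minimum(1.0, threshold / np.maximum(mag, 1e-12))
        return signal * scale

    def restore_with_parabola(clipped, threshold):
        """Illustrative reconstruction: over each run of clipped samples, replace the flat
        magnitude with a parabola fitted through the boundary samples and an assumed peak,
        keeping the original phase."""
        mag = np.abs(clipped).copy()
        phase = np.angle(clipped)
        is_clipped = mag >= threshold - 1e-9
        n, i = len(mag), 0
        while i < n:
            if is_clipped[i]:
                j = i
                while j < n and is_clipped[j]:
                    j += 1
                lo, hi = max(i - 1, 0), min(j, n - 1)              # boundary samples
                xs = np.array([lo, (lo + hi) / 2.0, hi])
                ys = np.array([mag[lo], threshold * 1.1, mag[hi]]) # assumed peak 10% above threshold
                coeff = np.polyfit(xs, ys, 2)                      # quadratic (conic) interpolation
                mag[i:j] = np.polyval(coeff, np.arange(i, j))
                i = j
            else:
                i += 1
        return mag * np.exp(1j * phase)

    # Example: clip a random OFDM-like symbol and reconstruct the clipped peaks
    rng = np.random.default_rng(0)
    x = np.fft.ifft(rng.standard_normal(64) + 1j * rng.standard_normal(64))
    thr = 0.8 * np.abs(x).max()
    x_hat = restore_with_parabola(clip(x, thr), thr)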

On Symmetry Analysis and Exact Wave Solutions of a New Modified Novikov Equation

In this paper, we study a new modified Novikov equation for its classical and nonclassical symmetries and use the symmetries to reduce it to a nonlinear ordinary differential equation (ODE). With the aid of solutions of the nonlinear ODE obtained by the recently proposed modified (G'/G)-expansion method, multiple exact traveling wave solutions are derived; the traveling wave solutions are expressed in terms of hyperbolic, trigonometric and rational functions.
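
For reference, the standard (G'/G)-expansion ansatz as commonly stated in the literature (the paper's modified variant may differ in detail) takes the following form.

    % Traveling wave reduction and (G'/G)-expansion ansatz (standard form; the
    % paper's modified variant may add further terms).
    \begin{align}
      u(x,t) &= U(\xi), \qquad \xi = x - c\,t, \\
      U(\xi) &= \sum_{i=0}^{N} a_i \left(\frac{G'(\xi)}{G(\xi)}\right)^{i}, \qquad a_N \neq 0, \\
      &\text{where } G'' + \lambda G' + \mu G = 0 .
    \end{align}

Depending on the sign of \lambda^2 - 4\mu, the auxiliary equation yields hyperbolic, trigonometric or rational expressions for G'/G, which is why the resulting traveling wave solutions fall into those three families.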

Face Recognition using a Kernelization of Graph Embedding

Linearization of graph embedding has emerged as an effective dimensionality reduction technique in pattern recognition. However, it may not be optimal for nonlinearly distributed real-world data, such as faces, due to its linear nature. Therefore, a kernelization of graph embedding is proposed as a dimensionality reduction technique for face recognition. In order to further boost the recognition capability of the proposed technique, Fisher's criterion is adopted in the objective function for better data discrimination. The proposed technique is able to characterize the underlying intra-class structure as well as the inter-class separability. Experimental results on the FRGC database validate the effectiveness of the proposed technique as a feature descriptor.
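
As an illustrative sketch (not the authors' exact formulation), a kernelized discriminant embedding with a Fisher-type criterion can be computed by forming an RBF kernel matrix and solving a generalized eigenvalue problem between the between-class and within-class scatter expressed through kernel columns. The kernel choice, regularization and toy data below are assumptions.

    import numpy as np
    from scipy.linalg import eigh
    from scipy.spatial.distance import cdist

    def kernel_fisher_embedding(X, y, gamma=0.1, reg=1e-3, dim=2):
        """Sketch of a kernelized Fisher-type embedding: between-class scatter M and
        within-class scatter N are built from kernel columns, and projection coefficients
        come from the generalized eigenproblem M a = lambda (N + reg I) a."""
        K = np.exp(-gamma * cdist(X, X, "sqeuclidean"))     # RBF kernel matrix
        n = len(y)
        m_all = K.mean(axis=1)
        M = np.zeros((n, n))
        N = np.zeros((n, n))
        for c in np.unique(y):
            idx = np.where(y == c)[0]
            Kc = K[:, idx]                                  # kernel columns of class c
            m_c = Kc.mean(axis=1)
            d = (m_c - m_all)[:, None]
            M += len(idx) * (d @ d.T)                       # between-class scatter
            centered = Kc - m_c[:, None]
            N += centered @ centered.T                      # within-class scatter
        vals, vecs = eigh(M, N + reg * np.eye(n))
        A = vecs[:, ::-1][:, :dim]                          # top discriminant directions
        return K @ A                                        # embedded training data

    # Toy usage with random stand-in "face" features (two classes):
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (10, 30)), rng.normal(2, 1, (10, 30))])
    y = np.array([0] * 10 + [1] * 10)
    print(kernel_fisher_embedding(X, y).shape)   # (20, 2)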

Group Invariant Solutions for Radial Jet Having Finite Fluid Velocity at Orifice

The group invariant solutions of Prandtl's boundary layer equations for an incompressible fluid governing the flow in radial free, wall and liquid jets having finite fluid velocity at the orifice are investigated. For each jet, a symmetry is associated with the conserved vector that was used elsewhere to derive the conserved quantity for the jet. This symmetry is then used to construct the group invariant solution of the third-order partial differential equation for the stream function. The general form of the group invariant solution for radial jet flows is derived and coincides with the general form of the similarity solution obtained elsewhere.
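
For orientation, one common way to write Prandtl's boundary layer equation for a steady radial jet in terms of a stream function psi(x, y), with x the radial coordinate, y the transverse coordinate, u = psi_y / x and v = -psi_x / x, is the third-order PDE below; the exact normalization used in the paper may differ.

    % Radial boundary layer flow written for the stream function \psi(x,y),
    % with u = \psi_y/x and v = -\psi_x/x (normalization may differ from the paper).
    \begin{equation}
      \frac{1}{x}\,\psi_y \psi_{xy}
      - \frac{1}{x^{2}}\,\psi_y^{2}
      - \frac{1}{x}\,\psi_x \psi_{yy}
      = \nu\, \psi_{yyy} .
    \end{equation}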

Analysis of Combustion, Performance and Emission Characteristics of Turbocharged LHR Extended Expansion DI Diesel Engine

The fundamental aim of the extended expansion concept is to achieve higher work output, which in turn leads to higher thermal efficiency. This concept is compatible with the application of a turbocharger and an LHR engine. The low heat rejection (LHR) engine was developed by coating the piston crown, the inside of the cylinder head with its valves, and the cylinder liner with a partially stabilized zirconia coating of 0.5 mm thickness. Extended expansion in diesel engines is termed the Miller cycle, in which the expansion ratio is effectively increased by reducing the compression ratio through modification of the inlet cam for late inlet valve closing. The specific fuel consumption is reduced to an appreciable level and the thermal efficiency of the extended expansion turbocharged LHR engine is improved. In this work, a thermodynamic model was formulated and developed to simulate the LHR-based extended expansion turbocharged direct injection diesel engine. It includes a gas flow model, a heat transfer model, and a two-zone combustion model. The gas exchange model is modified by incorporating the Miller cycle, delaying the inlet valve closing timing, which results in a considerable improvement in the thermal efficiency of turbocharged LHR engines. The heat transfer model calculates the convective and radiative heat transfer between the gas and the wall, taking into account the combustion chamber surface temperature swings. Using the two-zone combustion model, the combustion parameters and the chemical equilibrium compositions were determined. The chemical equilibrium compositions were used to calculate the nitric oxide formation rate assuming a modified Zeldovich mechanism. The accuracy of the model is scrutinized against actual test results from the engine. The factors which affect thermal efficiency and exhaust emissions were deduced and their influences discussed. In the final analysis, excellent agreement is seen in all of these evaluations.
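
As a small illustration of the Miller-cycle idea described above (late intake valve closing lowers the effective compression ratio while the geometric expansion ratio is retained), the sketch below computes an effective compression ratio from slider-crank kinematics. All geometry values and the intake-valve-closing angle are hypothetical, not those of the engine tested in the paper.

    import math

    def cylinder_volume(theta_deg, bore, stroke, conrod, clearance_vol):
        """Instantaneous cylinder volume from slider-crank kinematics (theta = crank angle after TDC)."""
        a = stroke / 2.0                               # crank radius
        theta = math.radians(theta_deg)
        # piston displacement from top dead centre
        s = a * (1 - math.cos(theta)) + conrod - math.sqrt(conrod**2 - (a * math.sin(theta))**2)
        area = math.pi * bore**2 / 4.0
        return clearance_vol + area * s

    # Hypothetical geometry (not the engine of the paper): bore, stroke, conrod length, clearance volume
    bore, stroke, conrod, Vc = 0.0875, 0.110, 0.232, 4.0e-5   # m, m, m, m^3

    V_bdc = cylinder_volume(180.0, bore, stroke, conrod, Vc)  # volume at bottom dead centre
    geometric_cr = V_bdc / Vc                                 # also the expansion ratio

    ivc_angle = 230.0                                         # assumed late IVC, deg after TDC of intake
    V_ivc = cylinder_volume(ivc_angle, bore, stroke, conrod, Vc)
    effective_cr = V_ivc / Vc                                 # compression only starts at IVC

    print(f"expansion ratio: {geometric_cr:.1f}, effective compression ratio: {effective_cr:.1f}")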

Role of Membership Functions in Fuzzy Logic for Prediction of Shoot Length of Mustard Plant Based on Residual Analysis

The selection of a particular type of mustard plant for plantation depends on its productivity (pod yield) at the stage of maturity. The growth of a mustard plant depends on several parameters of the plant, such as shoot length, number of leaves, number of roots and root length. As the plant grows, some leaves may fall and new leaves may appear, so the number of leaves does not help in developing a relationship with the seed weight at the mature stage of the plant. It is also not possible to determine the number of roots and the root length at the growing stage without harming the plant, as the roots go progressively deeper into the soil. Only the shoot length, which increases in the course of time, can be measured at different time instances. Weather parameters such as maximum and minimum humidity, rainfall, and maximum and minimum temperature may affect the growth of the plant. Pollution, water, soil, distance and crop management may also be dominant factors in the growth of the plant and its productivity. Considering all these parameters, the growth of the plant is highly uncertain, so a fuzzy environment can be adopted for the prediction of shoot length at maturity of the plant. Fuzzification of the data, which is based on certain membership functions, therefore plays a major role. Here an effort has been made to fuzzify the original data based on the Gaussian, triangular, S-, trapezoidal and L-functions. After that, all fuzzified data are defuzzified to obtain the normal form. Finally, the error analysis (calculation of forecasting error and average error) indicates which membership function is appropriate for fuzzification of the data and for use in predicting the shoot length at maturity. The result is also verified using residual analysis (Absolute Residual, Maximum of Absolute Residual, Mean Absolute Residual, Mean of Mean Absolute Residual, Median of Absolute Residual and Standard Deviation).
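
The membership functions named above have standard closed forms; a small sketch is given below with hypothetical parameters (centres, spreads and the example shoot-length values are illustrative, not the paper's fitted values).

    import numpy as np

    def gaussian_mf(x, c, sigma):
        """Gaussian membership function with centre c and spread sigma."""
        return np.exp(-0.5 * ((x - c) / sigma) ** 2)

    def triangular_mf(x, a, b, c):
        """Triangular membership function rising on [a, b] and falling on [b, c]."""
        return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

    def trapezoidal_mf(x, a, b, c, d):
        """Trapezoidal membership function: 0 outside [a, d], 1 on [b, c]."""
        x = np.asarray(x, dtype=float)
        return np.clip(np.minimum.reduce([(x - a) / (b - a), np.ones_like(x), (d - x) / (d - c)]), 0.0, 1.0)

    def s_mf(x, a, b):
        """S-shaped membership function: smooth rise from 0 at a to 1 at b."""
        x = np.asarray(x, dtype=float)
        m = (a + b) / 2.0
        return np.where(x <= a, 0.0,
               np.where(x <= m, 2 * ((x - a) / (b - a)) ** 2,
               np.where(x <= b, 1 - 2 * ((b - x) / (b - a)) ** 2, 1.0)))

    # Hypothetical fuzzification of shoot-length measurements (cm) around an assumed centre of 60 cm
    shoot_length = np.array([35.0, 52.0, 61.0, 78.0])
    print(gaussian_mf(shoot_length, c=60.0, sigma=15.0))
    print(triangular_mf(shoot_length, a=30.0, b=60.0, c=90.0))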

Comparison of Compression Ability Using DCT and Fractal Technique on Different Imaging Modalities

Image compression is one of the most important applications of Digital Image Processing. Advanced medical imaging requires storage of large quantities of digitized clinical data. Due to constrained bandwidth and storage capacity, however, a medical image must be compressed before transmission and storage. There are two types of compression methods, lossless and lossy. In lossless compression the original image is retrieved without any distortion, whereas in lossy compression the reconstructed images contain some distortion. The Discrete Cosine Transform (DCT) and Fractal Image Compression (FIC) are lossy compression methods. This work shows that lossy compression methods can be chosen for medical image compression without significant degradation of the image quality. In this work, DCT and fractal compression using Partitioned Iterated Function Systems (PIFS) are applied to different modalities of images such as CT scan, ultrasound, angiogram, X-ray and mammogram. Approximately 20 images are considered in each modality and the average values of compression ratio and Peak Signal to Noise Ratio (PSNR) are computed and studied. The quality of the reconstructed image is assessed by the PSNR values. Based on the results it can be concluded that DCT gives higher PSNR values and FIC gives higher compression ratios. Hence, in medical image compression, DCT can be used wherever picture quality is preferred and FIC wherever compression of images for storage and transmission is the priority, without losing diagnostic picture quality.
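
A compact sketch of the DCT side of the comparison is shown below, including the PSNR measure used to assess reconstruction quality. The block size, the crude coefficient-keeping rule and the random test image are illustrative assumptions, not the paper's settings; a real CT or ultrasound slice would be loaded in place of the random array.

    import numpy as np
    from scipy.fft import dctn, idctn

    def block_dct_compress(img, block=8, keep=10):
        """Toy DCT compression: per 8x8 block, keep only the 'keep' largest-magnitude coefficients."""
        h, w = img.shape
        out = np.zeros_like(img, dtype=float)
        for i in range(0, h - h % block, block):
            for j in range(0, w - w % block, block):
                c = dctn(img[i:i+block, j:j+block], norm="ortho")
                thresh = np.sort(np.abs(c).ravel())[-keep]
                c[np.abs(c) < thresh] = 0.0                 # crude "quantization"
                out[i:i+block, j:j+block] = idctn(c, norm="ortho")
        return out

    def psnr(original, reconstructed, peak=255.0):
        mse = np.mean((original.astype(float) - reconstructed) ** 2)
        return 10 * np.log10(peak**2 / mse)

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (256, 256)).astype(float)    # placeholder image
    rec = block_dct_compress(img)
    print(f"PSNR = {psnr(img, rec):.2f} dB")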

Thermal Stability Boundary of FG Panel under Aerodynamic Load

In this study, the stability boundary of a Functionally Graded (FG) panel under thermal loads and supersonic airflow is investigated. Material properties are assumed to be temperature dependent, and a simple power-law distribution is taken. First-order shear deformation theory (FSDT) of plates is applied to model the panel, and the von Karman strain-displacement relations are adopted to account for the geometric nonlinearity due to large deformation. Further, first-order piston theory is used to model the supersonic aerodynamic load acting on the panel, and a Rayleigh damping coefficient is used to represent the structural damping. In order to find the critical speed, a linear flutter analysis of FG panels is performed. Numerical results are compared with previous works, and the present results for the temperature-dependent material are discussed in detail with respect to the stability boundary of the panel for various volume fractions and aerodynamic pressures.
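
For reference, the first-order piston theory pressure is usually written in the form below (standard quasi-steady expression; the non-dimensionalization used in the paper may differ), with w the transverse deflection, M the Mach number, U the freestream speed and rho the air density.

    % First-order piston theory aerodynamic pressure on a panel (standard form;
    % the paper's non-dimensionalization may differ).
    \begin{equation}
      \Delta p \;=\; \frac{\rho U^{2}}{\sqrt{M^{2}-1}}
      \left[ \frac{\partial w}{\partial x}
           + \frac{M^{2}-2}{M^{2}-1}\,\frac{1}{U}\,\frac{\partial w}{\partial t} \right].
    \end{equation}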

Design and Simulation of an Electromagnetic Flow Meter for a Circular Pipe

By measuring the variation of magnetic flux, which is related to the velocity of a conductive flow, an electromagnetic flow meter can measure the flow rate of fluids carefully and precisely. Electromagnetic flow meter operation is based on Faraday's law of electromagnetic induction. In these devices, a constant magnetostatic field is produced by an electromagnet (a winding around the tube) outside the pipe, and the voltage induced by the conductive liquid flow is measured by electrodes located on two opposite sides of the pipe wall. In this research, we consider a two-dimensional mathematical model that is solved by a numerical finite difference (FD) approach to calculate the induced potential between the electrodes. The fundamental design of the electromagnetic flow meter and its exciting winding, together with the simulations, is carried out using MATLAB and the PDE Toolbox. In the last stage, simulation results are shown to assess the improvement and accuracy of the technical design.
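
The finite-difference idea can be illustrated with a small two-dimensional Poisson solver for the potential on a square cross-section grid; the geometry, the placeholder source term standing in for the v x B forcing, and the "electrode" read-out points below are assumptions, not the authors' model.

    import numpy as np

    def solve_potential(n=50, iters=5000, tol=1e-6):
        """Jacobi finite-difference iteration for a 2-D Poisson equation
        laplace(u) = -f on the unit square, with u = 0 on the boundary.
        Here 'f' is a placeholder for the source term driving the induced potential."""
        h = 1.0 / (n - 1)
        u = np.zeros((n, n))
        x = np.linspace(0, 1, n)
        X, Y = np.meshgrid(x, x, indexing="ij")
        f = X - 0.5                                      # placeholder, asymmetric source
        for _ in range(iters):
            u_new = u.copy()
            u_new[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                        + u[1:-1, 2:] + u[1:-1, :-2]
                                        + h * h * f[1:-1, 1:-1])
            if np.max(np.abs(u_new - u)) < tol:
                return u_new
            u = u_new
        return u

    u = solve_potential()
    # Potential difference read at two opposite points of the cross-section ("electrodes")
    print(u[1, 25] - u[-2, 25])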

An Augmented Automatic Choosing Control with Constrained Input Using Weighted Gradient Optimization Automatic Choosing Functions

In this paper we consider a nonlinear feedback control called augmented automatic choosing control (AACC) for nonlinear systems with constrained input, using weighted gradient optimization automatic choosing functions. The constant term which arises from the linearization of a given nonlinear system is treated as a coefficient of a stable zero dynamics. Parameters of the control are suboptimally selected by maximizing the stable region in the sense of Lyapunov with the aid of a genetic algorithm. This approach is applied to a field excitation control problem of a power system to demonstrate the effectiveness of the AACC. Simulation results show that the new controller can improve performance remarkably well.
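
The parameter search described above can be illustrated with a generic genetic algorithm maximizing a black-box objective; the placeholder objective below merely stands in for "size of the Lyapunov stable region" as a function of two controller parameters, and none of this is the AACC controller itself.

    import numpy as np

    def genetic_maximize(objective, bounds, pop_size=40, generations=100, mut_sigma=0.1, seed=0):
        """Minimal real-coded GA: tournament selection, blend crossover, Gaussian mutation, elitism."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
        for _ in range(generations):
            fit = np.array([objective(p) for p in pop])
            new_pop = [pop[np.argmax(fit)]]                      # keep the best individual
            while len(new_pop) < pop_size:
                i, j = rng.integers(0, pop_size, 2)
                a = pop[i] if fit[i] > fit[j] else pop[j]        # tournament parent 1
                i, j = rng.integers(0, pop_size, 2)
                b = pop[i] if fit[i] > fit[j] else pop[j]        # tournament parent 2
                w = rng.uniform(0, 1, len(bounds))
                child = w * a + (1 - w) * b                      # blend crossover
                child += rng.normal(0, mut_sigma, len(bounds)) * (hi - lo)
                new_pop.append(np.clip(child, lo, hi))
            pop = np.array(new_pop)
        fit = np.array([objective(p) for p in pop])
        return pop[np.argmax(fit)], fit.max()

    # Placeholder objective standing in for the stable-region measure (purely illustrative)
    stable_region = lambda p: -(p[0] - 1.2) ** 2 - (p[1] - 0.7) ** 2
    best, value = genetic_maximize(stable_region, bounds=[(0, 3), (0, 3)])
    print(best, value)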

Simulated Annealing Application for Structural Optimization

Several methods are available for weight and shape optimization of structures, among which Evolutionary Structural Optimization (ESO) is one of the most widely used. In ESO, however, the optimization criterion is completely case-dependent, and only improving solutions are accepted during the search. In this paper, a Simulated Annealing (SA) algorithm is used for the structural optimization problem. This algorithm differs from other random search methods by accepting non-improving solutions. The SA algorithm is implemented so as to reduce the number of finite element analyses (function evaluations). Computational results show that SA can efficiently and effectively solve such optimization problems within a short search time.
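
The acceptance rule that distinguishes SA from a pure descent search, namely accepting some non-improving designs with a temperature-dependent probability, can be summarized in the short sketch below. The objective and the neighbour move are placeholders for the finite-element weight evaluation and design perturbation used in the paper.

    import math
    import random

    def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.95, steps_per_temp=50, t_min=1e-4):
        """Generic SA loop with Metropolis acceptance of non-improving solutions."""
        x, fx = x0, cost(x0)
        best, fbest = x, fx
        t = t0
        while t > t_min:
            for _ in range(steps_per_temp):
                y = neighbour(x)
                fy = cost(y)                     # in structural optimization: one FE analysis
                if fy < fx or random.random() < math.exp(-(fy - fx) / t):
                    x, fx = y, fy                # accept the (possibly worse) design
                    if fx < fbest:
                        best, fbest = x, fx
            t *= cooling                         # geometric cooling schedule
        return best, fbest

    # Placeholder problem: minimize a 1-D multimodal "weight" function
    cost = lambda x: x * x + 10 * math.sin(3 * x)
    neighbour = lambda x: x + random.uniform(-0.5, 0.5)
    random.seed(0)
    print(simulated_annealing(cost, neighbour, x0=5.0))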

Enhanced Frame-based Video Coding to Support Content-based Functionalities

This paper presents an enhanced frame-based video coding scheme. The input source video to the enhanced frame-based video encoder consists of a rectangular-size video and the shapes of arbitrarily-shaped objects on the video frames. The rectangular frame texture is encoded by the conventional frame-based coding technique, and the video object's shape is encoded using contour-based vertex coding. It is possible to achieve several useful content-based functionalities by utilizing the shape information in the bitstream, at the cost of a very small bitrate overhead.

Measuring Teachers' Beliefs about Mathematics: A Fuzzy Set Approach

This paper deals with the application of a fuzzy set in measuring teachers' beliefs about mathematics. The vagueness of beliefs was transformed into standard mathematical values using a fuzzy preferences model. The study employed a fuzzy approach questionnaire consisting of six attributes for measuring mathematics teachers' beliefs about mathematics. The fuzzy conjoint analysis approach, based on fuzzy set theory, was used to analyze the data from twenty-three mathematics teachers from four secondary schools in Terengganu, Malaysia. Teachers' beliefs were recorded in the form of degrees of similarity and levels of agreement. The attribute 'Drills and practice is one of the best ways of learning mathematics' scored the highest degree of similarity at 0.79860 with the level 'strongly agree'. The results showed that the teachers' beliefs about mathematics varied, as shown by the different levels of agreement and degrees of similarity of the measured attributes.

Characterization of Indoor Power Lines as Data Communication Channels: Experimental Details and Results

In this paper, a multi-branch power line is modeled using ABCD matrices to show its worth as a communication channel. The model is simulated using MATLAB in an effort to investigate the effects of multiple loading, multipath, and those resulting from load mismatching. The channel transfer function is obtained and investigated for different cable lengths and different numbers of bridged taps under given loading conditions.
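
The modeling step can be sketched as follows: each line section is represented by its ABCD (transmission) matrix, a bridged tap enters as a shunt impedance at the tap point, the section matrices are cascaded by multiplication, and the transfer function follows from the overall matrix together with the source and load impedances. The cable parameters, lengths and terminations below are placeholders, not the measured values of the paper.

    import numpy as np

    def line_abcd(gamma, Zc, length):
        """ABCD matrix of a uniform transmission-line section of given length."""
        gl = gamma * length
        return np.array([[np.cosh(gl), Zc * np.sinh(gl)],
                         [np.sinh(gl) / Zc, np.cosh(gl)]])

    def shunt_abcd(Z_branch):
        """ABCD matrix of a bridged tap seen as a shunt impedance at the tap point."""
        return np.array([[1.0, 0.0], [1.0 / Z_branch, 1.0]])

    def transfer_function(sections, Zs, Zl):
        """Cascade the section matrices and compute H = V_load / V_source."""
        T = np.eye(2, dtype=complex)
        for M in sections:
            T = T @ M
        A, B, C, D = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
        return Zl / (A * Zl + B + Zs * (C * Zl + D))

    # Placeholder per-frequency cable parameters and topology
    f = 1e6                                                 # 1 MHz, illustrative
    gamma = 1e-5 * np.sqrt(f) + 1j * 2 * np.pi * f / 2e8    # assumed attenuation + phase constant
    Zc = 90.0                                               # assumed characteristic impedance, ohms
    sections = [line_abcd(gamma, Zc, 15.0),                 # 15 m main section
                shunt_abcd(Zc / np.tanh(gamma * 5.0)),      # 5 m open-circuited bridged tap
                line_abcd(gamma, Zc, 10.0)]                 # 10 m main section
    print(abs(transfer_function(sections, Zs=50.0, Zl=50.0)))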

Estimating Saturated Hydraulic Conductivity from Soil Physical Properties using a Neural Networks Model

Saturated hydraulic conductivity is one of the soil hydraulic properties that is widely used in environmental studies, especially of subsurface ground water. Since its direct measurement is time consuming and therefore costly, indirect methods such as pedotransfer functions have been developed, based on multiple linear regression equations and neural network models, to estimate saturated hydraulic conductivity from readily available soil properties, e.g. sand, silt and clay contents, bulk density, and organic matter. The objective of this study was to develop a neural networks (NNs) model to estimate saturated hydraulic conductivity from available parameters such as sand and clay contents, bulk density, van Genuchten retention model parameters (i.e. θ_r, α, and n) as well as effective porosity. We used two methods to calculate effective porosity: (1) φ_eff = θ_s − θ_FC, and (2) φ_eff = θ_s − θ_inf, in which θ_s is the saturated water content, θ_FC is the water content retained at −33 kPa matric potential, and θ_inf is the water content at the inflection point. A total of 311 soil samples from the UNSODA database was divided into three groups: 187 for training, 62 for validation (to avoid over-training), and 62 for testing the NNs model. A commercial neural network toolbox of the MATLAB software with a multi-layer perceptron model and the back propagation algorithm was used for the training procedure. Statistical parameters such as the correlation coefficient (R²) and the mean square error (MSE) were used to evaluate the developed NNs model. The best numbers of neurons in the middle layer of the NNs model for methods (1) and (2) were 44 and 6, respectively. The R² and MSE values of the test phase were 0.94 and 0.0016 for method (1), and 0.98 and 0.00065 for method (2), respectively, which shows that method (2) estimates saturated hydraulic conductivity better than method (1).
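
As an illustrative sketch of this kind of training setup (written in Python with scikit-learn rather than the MATLAB toolbox used in the paper, and with synthetic data standing in for the UNSODA samples), the split into training, validation and test subsets and the evaluation by R² and MSE look as follows.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.metrics import mean_squared_error, r2_score

    # Synthetic stand-in for the inputs: sand %, clay %, bulk density,
    # van Genuchten theta_r, alpha, n, and effective porosity (7 features).
    rng = np.random.default_rng(0)
    X = rng.uniform([5, 5, 1.1, 0.0, 0.005, 1.1, 0.2],
                    [90, 60, 1.8, 0.15, 0.15, 2.5, 0.6], size=(311, 7))
    log_Ks = 0.5 * X[:, 0] / 90 - 1.5 * X[:, 1] / 60 + 2.0 * X[:, 6] + rng.normal(0, 0.1, 311)

    # 187 training + 62 validation (via early stopping) and 62 test, mirroring the split in the text
    X_train, X_test = X[:249], X[249:]
    y_train, y_test = log_Ks[:249], log_Ks[249:]

    scaler = StandardScaler().fit(X_train)
    model = MLPRegressor(hidden_layer_sizes=(6,), solver="adam", early_stopping=True,
                         validation_fraction=62 / 249, max_iter=5000, random_state=0)
    model.fit(scaler.transform(X_train), y_train)

    pred = model.predict(scaler.transform(X_test))
    print("R2 =", r2_score(y_test, pred), "MSE =", mean_squared_error(y_test, pred))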

Removal of Elemental Mercury from Dry Methane Gas with Manganese Oxides

In this study, we sought to investigate the mercury removal efficiency of manganese oxides from natural gas. Fundamental studies on mercury removal with manganese oxide sorbents were carried out in a laboratory-scale fixed bed reactor at 30 °C with a mixture of methane (20%) and nitrogen gas laden with 4.8 ppb of elemental mercury. Manganese oxides with varying surface area and crystalline phase were prepared by a conventional precipitation method. The effects of surface area, crystallinity and other metal oxides on mercury removal efficiency were investigated, as was the effect of Ag impregnation. Ag supported on metal oxides such as titania and zirconia was also used as a reference material for comparison. The characteristics of the mercury removal reaction with manganese oxide were investigated using a temperature programmed desorption (TPD) technique. Manganese oxides showed very high Hg removal activity (about 73-93% Hg removal) on first use. The surface area of the manganese oxide samples decreased after heat treatment, resulting in complete loss of Hg removal ability on repeated use after Hg desorption in the case of amorphous MnO2, and a 75% loss of the initial Hg removal activity for the crystalline MnO2. The mercury desorption efficiency of crystalline MnO2 was very low (37%) on first use and high (98%) after second use. Residual potassium content in MnO2 may have some effect on the thermal stability of the adsorbed Hg species. Desorption of Hg from manganese oxides occurs at much higher temperatures (with a peak at 400 °C) than from Ag/TiO2 or Ag/ZrO2. Mercury may be captured on manganese oxides in the form of mercury manganese oxide.

An Efficient Algorithm for Delay and Delay-Variation Bounded Least Cost Multicast Routing

Many multimedia communication applications require a source to transmit messages to multiple destinations subject to a quality of service (QoS) delay constraint. To support delay-constrained multicast communications, computer networks need to guarantee an upper bound on the end-to-end delay from the source node to each of the destination nodes. This is known as the multicast delay problem. On the other hand, if the same message fails to arrive at each destination node at the same time, inconsistency and unfairness problems may arise among users. This is related to the multicast delay-variation problem. The problem of finding a minimum cost multicast tree with delay and delay-variation constraints has been proven to be NP-complete. In this paper, we propose an efficient heuristic algorithm, namely the Economic Delay and Delay-Variation Bounded Multicast (EDVBM) algorithm, based on a novel heuristic function, to construct an economic delay- and delay-variation-bounded multicast tree. A noteworthy feature of this algorithm is that it has a very high probability of finding the optimal solution in polynomial time with low computational complexity.
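
The two constraints that such a heuristic must respect can be stated operationally as a feasibility check on a candidate tree; the sketch below (using networkx for brevity, with a toy topology) is only the constraint check, not the EDVBM construction itself.

    import networkx as nx

    def satisfies_delay_bounds(tree, source, destinations, delay_bound, variation_bound):
        """Check the two QoS constraints on a candidate multicast tree:
        (i) every source-to-destination delay is within delay_bound, and
        (ii) the spread between the largest and smallest such delays is within variation_bound."""
        delays = [nx.shortest_path_length(tree, source, d, weight="delay") for d in destinations]
        return max(delays) <= delay_bound and (max(delays) - min(delays)) <= variation_bound

    # Toy tree with per-edge delays (purely illustrative)
    T = nx.Graph()
    T.add_weighted_edges_from([("s", "a", 2), ("a", "d1", 3), ("a", "d2", 4)], weight="delay")
    print(satisfies_delay_bounds(T, "s", ["d1", "d2"], delay_bound=7, variation_bound=2))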

An EOQ Model for Non-Instantaneous Deteriorating Items with Power Demand, Time Dependent Holding Cost, Partial Backlogging and Permissible Delay in Payments

In this paper, an Economic Order Quantity (EOQ) based model for non-instantaneously deteriorating items with Weibull-distributed deterioration and a power demand pattern is presented. In this model, the holding cost per unit of the item per unit time is assumed to be an increasing linear function of the time spent in storage. The retailer is allowed a trade-credit offer by the supplier to buy more items. Shortages are allowed and partially backlogged, with the backlogging rate dependent on the waiting time for the next replenishment. The model aids in minimizing the total inventory cost by finding the optimal replenishment time interval and the optimal order quantity. The optimal solution of the model is illustrated with the help of numerical examples. Finally, a sensitivity analysis and graphical representations are given to demonstrate the model.
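
For concreteness, the modelling ingredients named above are often written with the following standard forms in this literature; the exact parameterization used in the paper may differ, so these are given only as representative expressions.

    % Commonly used forms for the model ingredients (the paper's notation may differ):
    \begin{align}
      \theta(t) &= \alpha \beta t^{\beta - 1}, && \text{two-parameter Weibull deterioration rate}, \\
      h(t) &= h_0 + h_1 t, \quad h_0, h_1 > 0, && \text{linearly time-dependent holding cost}, \\
      B(t) &= \frac{1}{1 + \delta t}, && \text{backlogging rate as a function of waiting time } t .
    \end{align}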