Abstract: Character recognition is the process of converting a text image into an editable and searchable text file. Feature extraction is the heart of any character recognition system: the recognition rate may be low or high depending on the extracted features. In this paper, 25 features per character are used for character recognition. Character recognition consists of three basic steps: character segmentation, feature extraction and classification. In the segmentation step, a horizontal cropping method is used for line segmentation and a vertical cropping method for character segmentation. In the feature extraction step, features are extracted in two ways. In the first, 8 features are extracted from the entire input character using eight-direction chain-code frequency extraction. In the second, the input character is divided into 16 blocks; for each block, 8 feature values are again obtained through eight-direction chain-code frequency extraction, and their sum is taken as the feature of that block, so the second way yields 16 features from the 16 blocks. The number-of-holes feature is used to cluster similar characters. Almost all common Myanmar characters in various font sizes can be recognized using these features. All 25 features are used in both the training part and the testing part. In the classification step, characters are classified by matching all the features of the input character against the already trained features.
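A minimal sketch of the eight-direction chain-code frequency idea described above. The boundary tracing is assumed to have already produced an ordered, closed list of (x, y) contour points; the Freeman code numbering below (0 = east, counter-clockwise, with y increasing downward in image coordinates) is one common convention and not necessarily the paper's exact choice.

```python
# Map a step between adjacent boundary pixels to a Freeman chain code
# (image coordinates: y grows downward, so (0, -1) points "north").
DIRS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
        (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def chain_code_histogram(boundary):
    """Count how often each of the 8 directions occurs along a closed
    boundary; returns the 8-element frequency vector used as features."""
    hist = [0] * 8
    n = len(boundary)
    for i in range(n):
        x0, y0 = boundary[i]
        x1, y1 = boundary[(i + 1) % n]  # wrap around: the contour is closed
        hist[DIRS[(x1 - x0, y1 - y0)]] += 1
    return hist
```

The same function applied to each of the 16 blocks, with the 8 counts summed per block, would give the 16 block-level features, for 8 + 16 + 1 (number of holes) = 25 features per character.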
Abstract: With the aging of the world population and the continuous growth of technology, service robots are increasingly explored as alternatives to healthcare givers or as personal assistants for elderly or disabled people. Any service robot should be capable of interacting with its human companion, receiving commands, navigating through known or unknown environments, and recognizing objects. This paper proposes an approach to object recognition for a service robot based on the combined use of depth information and color images. We present a study of two of the most widely used methods for object detection, in which 3D data are used to detect the position of the objects to be classified that lie on horizontal surfaces. Since most objects of interest accessible to a service robot rest on such surfaces, the proposed 3D segmentation reduces processing time and simplifies the scene for object recognition. The first approach to object recognition is based on color histograms, while the second is based on the SIFT and SURF feature descriptors. We present comparative experimental results obtained with a real service robot.
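The color-histogram approach mentioned above can be sketched as follows: each segmented object is described by per-channel color histograms, and candidates are matched against stored models by histogram intersection. This is a generic illustration of the technique, not the paper's exact pipeline; bin count and similarity measure are assumptions.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Concatenate per-channel histograms of an RGB image (H, W, 3),
    each normalized to sum to 1, as a simple color descriptor."""
    feats = []
    for c in range(3):
        hist, _ = np.histogram(image[..., c], bins=bins, range=(0, 256))
        feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

def histogram_intersection(h1, h2):
    """Similarity in [0, 3] for two descriptors of 3 normalized
    channel histograms; identical images score 3.0."""
    return float(np.minimum(h1, h2).sum())
```

In practice the robot would compute such a descriptor only for the point-cloud clusters found on horizontal surfaces, which is what makes the prior 3D segmentation step pay off.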