Tirtaatmadja, Kevin and Rusyadi, Rusman (2016) Designing and Constructing an Autonomous Human Following Robot with Speech Recognition System. Bachelor thesis, Swiss German University.
Files:
- Kevin Tirtaatmadja 11112011 TOC.pdf (1MB)
- Kevin Tirtaatmadja 11112011 1.pdf (912kB, restricted to registered users)
- Kevin Tirtaatmadja 11112011 2.pdf (3MB, restricted to registered users)
- Kevin Tirtaatmadja 11112011 3.pdf (7MB, restricted to registered users)
- Kevin Tirtaatmadja 11112011 4.pdf (10MB, restricted to registered users)
- Kevin Tirtaatmadja 11112011 5.pdf (343kB, restricted to registered users)
- Kevin Tirtaatmadja 11112011 Ref.pdf (555kB)
Abstract
The objective of this thesis is to design, develop, and construct an autonomous human-following robot with a speech recognition system for noisy indoor environments, replacing the conventional shopping trolley with an automatic one that reduces the effort of disabled and elderly people while shopping [1]. The speech recognition system is added to the human-following robot as a substitute for vision navigation whenever it fails to detect and track the human. Speech input is processed by the PocketSphinx speech recognizer and compiled using the Qt Framework; PocketSphinx is tested for accuracy and compared with Simon. The processed speech commands are sent serially to an Arduino using the Firmata protocol and translated into DC motor movements. In addition, the Festival speech synthesizer is installed as a feedback system, announcing each command received and each situation the robot encounters. For obstacle avoidance and safety, a range-finder sensor is installed on the mobile robot. The speech and vision navigation programs are integrated using UDP communication so the robot can switch between the two navigation modes. Based on the test results, PocketSphinx is reliable for speech recognition in a noisy indoor environment for both male and female speakers, with a total accuracy of 85%; the Sharp IR sensor is sufficiently accurate as a range finder, with 95% accuracy; and the integration between speech and vision works well, so the robot can continue moving when vision navigation fails.
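The abstract outlines a pipeline of PocketSphinx recognition, Firmata serial commands to an Arduino, Festival voice feedback, and a UDP link to the vision-navigation process. A minimal Python sketch of that flow is shown below; it is illustrative only, not the thesis code, and assumes the `pocketsphinx` and `pyfirmata` Python bindings, a `festival` command-line installation, and placeholder values for the serial port, PWM pins, motor speeds, and the UDP endpoint of the vision process.

```python
# Illustrative sketch only (not the thesis implementation): spoken commands
# recognised by PocketSphinx drive two DC motors through an Arduino running
# Firmata, Festival speaks a confirmation, and a UDP message tells the vision
# process that speech navigation has taken over. Port names, pin numbers,
# speeds, and the UDP address are assumptions made for this example.
import socket
import subprocess

from pocketsphinx import LiveSpeech   # microphone decoding loop
from pyfirmata import Arduino         # Firmata client over a serial link

board = Arduino('/dev/ttyACM0')          # hypothetical Arduino serial port
left_motor = board.get_pin('d:9:p')      # PWM pins chosen for illustration
right_motor = board.get_pin('d:10:p')

vision_link = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
VISION_ADDR = ('127.0.0.1', 5005)        # assumed address of the vision node

# (left, right) PWM duty cycles per spoken command; the values are placeholders.
COMMANDS = {
    'forward': (0.6, 0.6),
    'left':    (0.2, 0.6),
    'right':   (0.6, 0.2),
    'stop':    (0.0, 0.0),
}

def speak(text):
    """Voice feedback via the Festival text-to-speech command-line tool."""
    subprocess.run(['festival', '--tts'], input=text.encode())

# LiveSpeech yields one decoded hypothesis per detected utterance.
for phrase in LiveSpeech():
    command = str(phrase).strip().lower()
    if command not in COMMANDS:
        continue
    vision_link.sendto(b'SPEECH_MODE', VISION_ADDR)  # hand navigation to speech mode
    left, right = COMMANDS[command]
    left_motor.write(left)
    right_motor.write(right)
    speak('moving ' + command)
```

The thesis itself compiles the recognizer within the Qt Framework; the Python bindings are used here only to keep the sketch short and self-contained.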
Item Type: | Thesis (Bachelor) |
---|---|
Uncontrolled Keywords: | Autonomous Mobile Robot; Speech Recognition; PocketSphinx; Arduino; Qt Framework |
Subjects: | T Technology > TJ Mechanical engineering and machinery > TJ163.12 Mechatronics; T Technology > TJ Mechanical engineering and machinery > TJ210 Robots (General); T Technology > TJ Mechanical engineering and machinery > TJ223 Microcontrollers > TJ223.1 Arduino (Programmable controller) |
Divisions: | Faculty of Engineering and Information Technology > Department of Mechatronics Engineering |
Depositing User: | Atroridho Rizky |
Date Deposited: | 12 Nov 2020 13:22 |
Last Modified: | 12 Nov 2020 13:22 |
URI: | http://repository.sgu.ac.id/id/eprint/1418 |