Open Access

Music Information Retrieval from a Singing Voice Using Lyrics and Melody Information

EURASIP Journal on Advances in Signal Processing 2007:038727

Received: 1 December 2005

Accepted: 10 September 2006

Published: 6 December 2006


Recently, several music information retrieval (MIR) systems that retrieve musical pieces from a user's singing voice have been developed. All of these systems use only melody information for retrieval, although lyrics information is also useful. In this paper, we propose a new MIR system that uses both lyrics and melody information. First, we propose a new lyrics recognition method in which a finite state automaton (FSA) is used as the recognition grammar, and about retrieval accuracy was obtained. We also develop an algorithm for verifying the hypotheses output by the lyrics recognizer. Melody information is extracted from the input song using several pieces of information from a hypothesis, and a total score is calculated from the recognition score and the verification score. In the experiments, 95.0% retrieval accuracy was obtained with queries consisting of five words.
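The scoring scheme described above can be sketched as follows. This is a minimal illustration, not the authors' actual formulation: the class and function names, and the interpolation weight `alpha`, are assumptions introduced here. Each retrieval hypothesis carries a lyrics-recognition score and a melody-verification score, and a weighted combination of the two ranks the candidate songs.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One candidate song produced by the lyrics recognizer (names are illustrative)."""
    song_id: str
    recognition_score: float   # score from the lyrics recognizer (FSA grammar)
    verification_score: float  # score from melody-based verification

def total_score(h: Hypothesis, alpha: float = 0.5) -> float:
    """Combine the two scores; alpha is a tunable weight (an assumption here)."""
    return alpha * h.recognition_score + (1.0 - alpha) * h.verification_score

def rank(hypotheses: list[Hypothesis], alpha: float = 0.5) -> list[Hypothesis]:
    """Sort candidates by total score, best first."""
    return sorted(hypotheses, key=lambda h: total_score(h, alpha), reverse=True)

hyps = [
    Hypothesis("song_a", recognition_score=0.8, verification_score=0.6),
    Hypothesis("song_b", recognition_score=0.7, verification_score=0.9),
]
best = rank(hyps)[0]  # with alpha=0.5: song_b (0.8) outranks song_a (0.7)
```

A linear combination is only one plausible choice; the paper computes a total score from the two component scores, and the exact form would have to be taken from the full text.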


Keywords: Information Technology, Information Retrieval, Quantum Information, Recognition Method, Recognition Score


Authors’ Affiliations

Graduate School of Engineering, Tohoku University, Aoba-ku, Sendai, Japan




© Motoyuki Suzuki et al. 2007

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.