Artificial Art Critic

College of Charleston researchers train a computer to rate songs and even compose original music. Not bad for a box.

Open up your iTunes and check out your top 25 most played songs. What is it exactly about these two-dozen-plus songs that keeps you hitting the play button again and again? Is it the singer's style, the infectious melody, the funky beat? If pressed for an answer, you'd probably say, "They're just good!"

But that answer won't cut it for a team of computer scientists at the College of Charleston who are working at the unexplored intersection of artificial intelligence and aesthetics. Professor Bill Manaris and his team of undergraduate and graduate research assistants are attempting to design the world's first aesthetics-based music search engine by training a computer to recognize aesthetic similarities among thousands of pieces of music, and even to decide which pieces are "good" and which are "bad."

At the core of their research is a complex piece of computing wizardry called an Artificial Neural Network (ANN). The ANN is programmed to sort through a database of music and extract measurable values from each song, such as pitch, duration, melodic intervals and harmonic intervals. The ANN then plugs that information into various mathematical models that help it determine which two songs are most alike.
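
To make that idea concrete, here is a minimal sketch, not the team's actual code, of how a song represented as a list of notes might be reduced to a few numeric features and then compared with another song. The note representation, the particular features, and the cosine-similarity measure are all illustrative assumptions.

```python
# Illustrative sketch: extract simple statistical features from a song
# (a list of (midi_pitch, duration_in_beats) tuples) and compare two
# songs by the cosine similarity of their feature vectors.
from math import sqrt

def extract_features(notes):
    """Return a small feature vector for a song given as (pitch, duration) notes."""
    pitches = [p for p, _ in notes]
    durations = [d for _, d in notes]
    intervals = [b - a for a, b in zip(pitches, pitches[1:])]  # melodic intervals
    n = len(notes)
    return [
        sum(pitches) / n,                                          # average pitch
        sum(durations) / n,                                        # average duration
        sum(abs(i) for i in intervals) / max(len(intervals), 1),   # average melodic leap
        len(set(pitches)) / n,                                     # pitch variety
    ]

def similarity(song_a, song_b):
    """Cosine similarity of two songs' feature vectors (closer to 1.0 = more alike)."""
    a, b = extract_features(song_a), extract_features(song_b)
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0
```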

Funded by a generous grant from the National Science Foundation (NSF), Manaris and his staff are also training the ANN to make the same aesthetic judgments about music that humans make intuitively. One method they use is to take an existing public database of 15,000 pieces of classical music and pull out the 1,000 most downloaded songs and the 1,000 least downloaded songs. The assumption is that there's something inherently more "likable" or "aesthetically pleasing" about the 1,000 popular songs.
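
As a rough illustration of that labeling step (the field name and the cutoff of 1,000 are placeholders here, not details of the actual database), a popularity-based training set could be assembled like this:

```python
# Hypothetical labeling step: rank a corpus by download count and keep the
# extremes as "good" (label 1) and "bad" (label 0) examples. The 'downloads'
# field is an assumed attribute, not the real database schema.
def build_labeled_set(corpus, k=1000):
    ranked = sorted(corpus, key=lambda song: song["downloads"], reverse=True)
    liked = [(song, 1) for song in ranked[:k]]      # the k most downloaded songs
    disliked = [(song, 0) for song in ranked[-k:]]  # the k least downloaded songs
    return liked + disliked
```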

Using 156 different measurements for each song, Manaris' team has been able to tweak the ANN's mathematical models until it can decide with 91 percent accuracy whether each of the 2,000 songs falls into the "good" or "bad" category.
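A hedged sketch of what that train-and-score loop could look like, assuming the 156 measurements per song have already been computed; the data below is random placeholder values, and scikit-learn's off-the-shelf MLPClassifier stands in for the team's own ANN:

```python
# Illustrative setup only: a small neural network learns to separate "good"
# from "bad" songs given a 156-dimensional feature vector per song, then is
# scored on held-out examples.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder data: 2,000 songs x 156 measurements; labels 1 = popular, 0 = not.
X = np.random.rand(2000, 156)
y = np.array([1] * 1000 + [0] * 1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2%}")
```
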

What they're creating, says Manaris, is essentially an "artificial art critic." This intelligent machine would be at the heart of a new kind of "Google for music" that spits out song suggestions that are aesthetically similar to the one you enter. The team already has a working prototype called Armonique, which you can demo for free.
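
For a sense of how such a search engine might assemble its suggestions (purely hypothetical, and no reflection of Armonique's real internals), a catalog could be ranked against a seed song using the similarity function sketched earlier:

```python
# Hypothetical query flow: score every song in a catalog against the seed
# song with the similarity() function sketched above and return the titles
# of the closest matches.
def recommend(seed_song, catalog, top_n=10):
    scored = [(similarity(seed_song, notes), title) for title, notes in catalog.items()]
    scored.sort(reverse=True)                     # highest similarity first
    return [title for _, title in scored[:top_n]]
```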

Go to the SGER project site to learn more about this revolutionary technology. You can even listen to music composed by a machine.