Progress Report

 

Progress Overview

 

The project is progressing smoothly! As outlined in the previous progress report, we divided the functionality of our product, and the work needed to implement it, into three stages.  At the time of the first progress report, we had implemented "Stage 1: Tuning a Single Note" and "Stage 2: Tuning a Series of Notes". We have now completed the final stage, "Stage 3: Tuning a Guitar Chord".  For the next two weeks, our group's focus will be on fine-tuning various aspects of our project, preparing the project for the final presentation, and preparing ourselves to deliver that presentation.  The second of these three tasks includes making our code easily runnable from one location, with minimal clutter produced and required in the MATLAB workspace.  The third task requires us to set up a demonstration that best showcases all of our project's abilities, and to practice our delivery of this demonstration.

 

We have planned a timeline for the next two weeks that will allow us to finalize everything on time.

 

  • December 3rd - Fine-tune Stage 3: Tuning a Guitar Chord

  • December 5th - Clean up MATLAB code, first draft of presentation slides

  • December 6th - Finalize slides and practice presentation run-through

  • December 7th - Final presentation in class

  • December 9th - Finalize website

  • December 10th - Website and final project due

 

We have had a few minor "bumps in the road" here and there, but have gotten over or around them by finding solutions. One recent bump had to do with correcting the pitch of notes played in sequence.  Because all three of our group members live in a causal world, we have no way of knowing the length of any note being played until it is done being played. Additionally, we would like our pitch-correcting abilities to be robust to changes in tempo. We originally ran very long, windowed FFTs on our input sequence, but found the sound of the reproduced and corrected sequence to be distorted and blurred.  The solution to this problem was obvious: decrease the FFT length.  However, finding the "golden" FFT length for the highest quality sound reproduction was not so obvious.  If we included too few samples in the FFT, it couldn't identify pitch accurately. In fact, we found that the optimal length depends on the tempo (the rate of change in frequency) of the input sequence of notes.  We settled on an FFT length that best matched the average tempo of music; a sketch of this windowed analysis appears below.  This is a bump we may revisit while fine-tuning our project.
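To make the tradeoff concrete, here is a minimal sketch of the kind of windowed FFT analysis described above. The window length, hop size, and test tones are illustrative values, not our final tuned parameters.

```matlab
% Sketch: windowed FFT analysis of a sequence of notes.
% The FFT length N trades frequency resolution (Fs/N Hz per bin)
% against time resolution (each estimate is smeared over N/Fs seconds).
Fs  = 44100;                        % sample rate (Hz)
t   = 0:1/Fs:0.5;
x   = [sin(2*pi*440*t), sin(2*pi*523.25*t)];   % A4 then C5 test tones
N   = 4096;                         % illustrative FFT length (~93 ms)
hop = N/2;                          % 50% overlap between windows
w   = hamming(N)';                  % taper to reduce spectral leakage

nFrames = floor((length(x) - N)/hop) + 1;
f0 = zeros(1, nFrames);
for k = 1:nFrames
    frame = x((k-1)*hop + (1:N)) .* w;   % windowed segment
    X = abs(fft(frame));
    [~, idx] = max(X(1:N/2));       % strongest positive-frequency bin
    f0(k) = (idx - 1) * Fs / N;     % bin index -> frequency (Hz)
end
% f0 now tracks the pitch over time; a shorter N follows tempo changes
% more closely, at the cost of coarser frequency estimates.
```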

 

The coolest moment in the progression of our project came when we tested its pitch-correcting abilities with Kyle Harman's voice.  Kyle H. is a talented singer in the University of Michigan Men's Glee Club (see picture below), and he sang into the recording microphone slightly off pitch (intentionally, of course :) ).  Our project then replayed his voice at a more correct pitch.  The noticeable difference was what made it exciting.

[Photo: Kyle Harman with the University of Michigan Men's Glee Club]
As a whole, our project is moving along smoothly.  We have kept the same general design we set out with, and our plan for the next two weeks remains the same.

 

DSP Tools


We have employed a variety of DSP tools in order to implement our auto-tuner:

 

In-class

 

1. The Discrete Fourier Transform (DFT) - In this project, we use the fast Fourier transform (FFT), an efficient algorithm for computing the DFT, to take our musical signal into the frequency domain.  This is an essential step in our auto-tuning algorithm; in the frequency domain, we are able to view the fundamental frequency of our signal, determine where this frequency should lie in a tuned note, and shift the frequency appropriately.
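As a toy illustration of this step, here is a minimal sketch (with an illustrative test tone, not our actual code) of taking a signal into the frequency domain and reading off its fundamental:

```matlab
% Sketch: use the FFT to find where a note's fundamental frequency lies.
Fs = 44100;                          % sample rate (Hz)
t  = 0:1/Fs:1;
x  = sin(2*pi*196*t);                % test tone near G3 (196 Hz)
N  = length(x);
X  = abs(fft(x));
f  = (0:N-1)*Fs/N;                   % frequency represented by each bin
[~, idx] = max(X(1:floor(N/2)));     % strongest positive-frequency bin
fprintf('Fundamental near %.1f Hz\n', f(idx));
```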

 

2. The Inverse Discrete Fourier Transform - Another key DSP algorithm we used was the inverse FFT, which allows us to take a note that has been 'tuned' in the frequency domain and recover its time-domain representation, which is what the human ear recognizes as music.
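A minimal round-trip sketch of this step (the frequency-domain tuning itself is elided; only the transform pair is shown):

```matlab
% Sketch: fft() into the frequency domain, ifft() back to a waveform.
Fs = 8000;
t  = 0:1/Fs:1;
x  = sin(2*pi*440*t);
X  = fft(x);                   % frequency-domain representation
% ... tuning manipulations would happen here ...
y  = ifft(X, 'symmetric');     % back to a real-valued time-domain signal
fprintf('Round-trip error: %g\n', max(abs(x - y)));
```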

 

3. Band-pass filters - In order to tune a guitar chord, we had to be able to manipulate and tune each note of the chord individually. We approached this using band-pass filters. We created a filter H(w) in the frequency domain that was high for our note of interest and its partial frequencies, and low for the fundamental frequencies and partials of the other notes.  We then multiplied this filter with our frequency-domain signal (equivalently, convolving the filter's impulse response with the time-domain signal) to isolate the note of interest.
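Here is a minimal sketch of that masking idea on a toy two-note "chord"; the note frequencies, number of partials, and mask width are illustrative choices, not our project's actual parameters:

```matlab
% Sketch: isolate one note of a toy chord with a frequency-domain mask
% H that is 1 near the note's fundamental and partials, 0 elsewhere.
Fs = 8000;
t  = 0:1/Fs:1;
x  = sin(2*pi*220*t) + sin(2*pi*330*t);   % A3 + E4 toy "chord"
N  = length(x);
f  = (0:N-1)*Fs/N;                        % frequency of each FFT bin
target = 220;  bw = 15;                   % note of interest, half-width (Hz)
H = zeros(1, N);
for partial = target*(1:8)                % fundamental + 7 partials
    H(abs(f - partial) < bw) = 1;         % pass band around each partial
    H(abs(f - (Fs - partial)) < bw) = 1;  % mirror for negative frequencies
end
y = ifft(fft(x) .* H, 'symmetric');       % multiply in frequency = filter
% y now contains (approximately) only the 220 Hz note.
```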

 

4. Properties of linearity - Because our filters and frequency-band manipulations represent linear systems, we were able to break a chord into its component notes, apply a filter to each of the notes, and then add the filtered note signals together to recover the tuned chord.
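A toy sketch of that superposition idea, reusing a masking filter like the one above (frequencies and mask width are again illustrative):

```matlab
% Sketch: filter each note of a toy chord separately, then sum the
% filtered signals; by linearity, the sum reassembles the chord.
Fs = 8000;  t = 0:1/Fs:1;  N = length(t);
chord = sin(2*pi*220*t) + sin(2*pi*330*t);
f = (0:N-1)*Fs/N;
y = zeros(1, N);
for f0 = [220, 330]                       % one pass per note
    H = double(abs(f - f0) < 15 | abs(f - (Fs - f0)) < 15);
    note = ifft(fft(chord) .* H, 'symmetric');
    % ... per-note tuning (resampling) would happen here ...
    y = y + note;                         % superposition of the notes
end
% y approximates the original chord, rebuilt from its filtered parts.
```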

 

Out-of-class

 

5. Resampling signals to change frequency - To implement the actual process of "tuning" a note (changing the fundamental frequency and all of its partials), we used MATLAB's resample() function. This function takes a signal and either stretches or compresses it by a given rational factor. This effectively increases or decreases the frequency of the sample without changing the shape of the sample's waveform, which implements the frequency change that we need.
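A minimal sketch of a resample()-based pitch shift; the slightly flat test tone and correction ratio are illustrative:

```matlab
% Sketch: correct a flat note's pitch with resample().
Fs = 44100;
t  = 0:1/Fs:1;
x  = sin(2*pi*430*t);          % slightly flat A4 (430 Hz instead of 440)
ratio  = 440/430;              % desired pitch-correction factor
[p, q] = rat(1/ratio);         % rational approximation for resample()
y = resample(x, p, q);         % compress the signal by 1/ratio
% Played back at the original Fs, y sounds at ratio * 430 = 440 Hz
% (its duration shrinks by the same factor):
% sound(y, Fs)
```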

 

6. Peak detection - Frequency peaks (local maxima) are detected in our project using MATLAB's findpeaks() function. This function takes in a vector (we input our DFT magnitude spectrum) and a minimum threshold for what counts as a "peak", and outputs a vector containing the magnitude of each peak and a vector containing their locations. In our peak-detecting function, we find the peaks of our frequency spectrum using findpeaks(), then remove the peaks that are octaves of one another (see the music theory section) and those that are too low or too high in frequency. The function ultimately outputs the three frequencies that represent the three fundamental notes of our chord.
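Here is a minimal sketch of that peak-picking step on a toy triad; the threshold and octave tolerance are illustrative values:

```matlab
% Sketch: find chord fundamentals with findpeaks(), then drop peaks
% that are octaves of an already-kept peak.
Fs = 8000;  t = 0:1/Fs:1;
x = sin(2*pi*220*t) + 0.8*sin(2*pi*277*t) + 0.6*sin(2*pi*330*t);
N = length(x);
X = abs(fft(x));
f = (0:N-1)*Fs/N;
half = 1:floor(N/2);                      % positive frequencies only
[~, locs] = findpeaks(X(half), 'MinPeakHeight', 0.1*max(X));
freqs = f(locs);
keep = true(size(freqs));
for i = 2:numel(freqs)
    for j = 1:i-1
        r = log2(freqs(i)/freqs(j));      % octaves differ by powers of 2
        if keep(j) && abs(r - round(r)) < 0.02
            keep(i) = false;              % octave duplicate; drop it
        end
    end
end
fundamentals = freqs(keep)                % the three chord fundamentals
```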

 

7. Music theory - Music theory is an important element of our project. Our project tunes notes to what is known in music theory as 12-tone equal temperament, in which adjacent notes are equally spaced on a base-2 logarithmic scale of frequency. Our project takes in arbitrary frequencies and essentially discretizes them to the standard frequencies of 12-tone equal temperament. The functions in our project also understand the concepts of octaves and harmonic partials. From music theory we know that two notes are octaves of one another if their frequencies are related by a power of two; we programmed our functions to recognize when two frequencies are octaves of one another and treat them accordingly. A musical note played by an instrument is typically comprised of a fundamental frequency and various harmonic partials, which are integer multiples of the fundamental frequency. Our function that separates notes accounts for these partials.
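As a small worked example of this discretization (reference pitch A4 = 440 Hz; the input frequency is made up):

```matlab
% Sketch: snap an arbitrary frequency to the nearest 12-tone equal
% temperament pitch, using A4 = 440 Hz as the reference.
f_in = 455.0;                        % detected, out-of-tune frequency (Hz)
n = round(12 * log2(f_in / 440));    % nearest semitone offset from A4
f_tuned = 440 * 2^(n/12);            % the 12-TET target frequency
fprintf('%.1f Hz snaps to %.2f Hz\n', f_in, f_tuned);
% By the same theory, two frequencies are octaves of one another
% exactly when log2(f1/f2) is an integer.
```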

 
