This first week, I tried to focus my project into something tangible. Initially, I was unsure how I wanted to go about editing and building SoundStroll 2.0, and my advisors told me that I was trying to achieve too much in a short period and needed to find a focus. Interestingly, I think that building a bandpass filter, in addition to other improvements, can produce different effects in SoundStroll 2.0, and I'm curious to see how that might change its effects and the three-dimensional objects that are spatialized with the signals.
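As a rough sketch of the kind of bandpass filtering I have in mind, here is a minimal Python biquad using the standard Audio EQ Cookbook coefficients (in the actual Max patch this would be signal objects instead; the 1 kHz center frequency and Q of 5 are placeholder values, not final SoundStroll parameters):

```python
import math

def bandpass_coeffs(fc, q, fs):
    """Biquad bandpass coefficients (constant 0 dB peak gain),
    from the Audio EQ Cookbook formulas."""
    w0 = 2 * math.pi * fc / fs
    alpha = math.sin(w0) / (2 * q)
    b = [alpha, 0.0, -alpha]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    # Normalize so a[0] == 1
    return [x / a[0] for x in b], [x / a[0] for x in a]

def biquad(samples, b, a):
    """Direct Form I filtering of a list of samples."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

fs = 44100
b, a = bandpass_coeffs(fc=1000, q=5.0, fs=fs)
n = 4096
sine_1k = [math.sin(2 * math.pi * 1000 * t / fs) for t in range(n)]
sine_100 = [math.sin(2 * math.pi * 100 * t / fs) for t in range(n)]
# Peak amplitude of the second half of the output (skips the startup transient)
peak = lambda xs: max(abs(x) for x in xs[n // 2:])
# A 1 kHz tone should pass almost unchanged; a 100 Hz tone should be heavily attenuated.
print(peak(biquad(sine_1k, b, a)), peak(biquad(sine_100, b, a)))
```

Sweeping the center frequency or Q in real time is exactly the kind of parameter change that could be tied to the spatialized objects to produce different effects.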
In mapping out what needs to be done for the project, I still have several main steps:
1. Completely overhaul the spatialization toolset/visualizer, adding a new, more advanced one that works with the updated software.
2. Add vocal recognition to the updated Max project through a speech-processing patch of my own design, and get the two to work together.
3. Add components that track Fourier transforms for analysis, supporting both the original audio and the vocal recognition added through the speech-processing patch.
4. Make sure the vocal, MIDI, and OpenGL components work together so that objects can be spatialized through either the vocal component or the MIDI interface.
5. Potentially add a virtual reality (VR) component that lets you traverse the world.
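For step 3, here is a minimal sketch of the kind of Fourier analysis I mean: taking the DFT of one frame of audio and reading off the dominant frequency, the sort of measurement a speech-processing patch could build on (pure Python for illustration; in Max this would run through objects like fft~ or pfft~, and the 250 Hz tone is a synthetic placeholder, not real speech data):

```python
import cmath
import math

def dft(frame):
    """Naive DFT of a real-valued frame; returns complex bins 0..N-1."""
    n = len(frame)
    return [sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def dominant_freq(frame, fs):
    """Frequency (Hz) of the strongest bin in the first half of the spectrum."""
    spec = dft(frame)
    half = len(spec) // 2
    mags = [abs(spec[k]) for k in range(half)]
    k_peak = mags.index(max(mags[1:]))  # skip the DC bin
    return k_peak * fs / len(frame)

fs = 8000
n = 256
# Synthetic tone at 250 Hz, exactly 8 cycles per frame
frame = [math.sin(2 * math.pi * 250 * t / fs) for t in range(n)]
print(dominant_freq(frame, fs))  # prints 250.0
```

Per-frame measurements like this (dominant frequency, band energies) are the kind of data that could both feed the vocal recognition and be analyzed afterward for deeper findings.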
Ultimately, I want SoundStroll 2.0 to be a logical successor to its former self. The new edition will use Fourier transforms for vocal recognition through a speech-processing application, whose output can also be analyzed for deeper findings, aurally or mathematically, in relation to sound. It will use a different, more advanced spatialization toolset to create a scene you can traverse, and a more connected environment that opens up new ways to traverse it with Max for Live, Ableton Live, and Reason, allowing new sounds and objects to be triggered and spatialized through SoundStroll 2.0.
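As a toy illustration of the MIDI-driven spatialization described above, one could map an incoming note-on to a 3D position for an object (this mapping scheme and every name here are my own placeholders, not SoundStroll's actual design; in Max the note would arrive via an object like notein):

```python
import math

def note_to_position(note, velocity, radius=5.0):
    """Hypothetical mapping from a MIDI note-on to a 3D position:
    pitch class picks an angle around the listener, octave picks height,
    and velocity picks distance. A placeholder scheme for illustration."""
    angle = (note % 12) / 12.0 * 2 * math.pi   # pitch class -> azimuth
    height = float(note // 12 - 5)             # octave -> y, centered near middle C
    dist = radius * (velocity / 127.0)         # louder -> farther from the listener
    x = dist * math.cos(angle)
    z = dist * math.sin(angle)
    return (round(x, 2), height, round(z, 2))

print(note_to_position(60, 127))  # middle C at full velocity -> (5.0, 0.0, 0.0)
```

The same function shape would work whether the trigger comes from the MIDI interface or from the vocal component, which is the kind of interchangeability step 4 is after.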