This past week, I spent countless hours working on my final paper. I also spent a considerable amount of time correcting errors and bugs in my code in MaxMSP.
This week I worked on improving my final paper and on the MIDI mapping and audio of my project in MaxMSP. I primarily worked on getting the audio files formatted correctly so that each one can loop and connect to its associated object in the three-dimensional space within MaxMSP. Second, I worked on the structure of my paper: it needed to flow better from topic to topic while making clear why each topic is relevant to the project as a whole.
This week mainly dealt with the writing of my final paper. The paper was a mostly complete draft that served my initial goal and vision for SoundStroll 2.0. In its final form, however, the project will be called SonoSpatial Walk, and it will be entirely its own project. The changes I have made allow objects to be generated through MIDI, and I hope the source code I have completely reworked will allow other additions as well. With object generation in place, I can move on to the sound properties that play and loop through triggers. Along with those sound properties, I am hoping to resynthesize sound in real time using Fourier transforms in order to completely change the sound as well.
This week, while I was not able to work on the coding aspect of my project, I did receive a synthesizer from Professor Forrest Tobey, and I have been working extensively in MaxMSP to get it to react to my program.
Also, I have been working on my paper to get it ready for the 02/25/19 deadline.
This week, I was able to realize partial spatialization in MaxMSP using HOA Library, the new tool that I had found previously. Right now I am trying to get my object driver working so that it can create multiple figures. Also, Forrest wanted me to map the knobs of a MIDI keyboard to Max and the patch.
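At its core, the knob mapping boils down to rescaling 7-bit MIDI controller values onto each parameter's range. Here is a minimal Python sketch of that arithmetic (in Max itself this is typically [ctlin] feeding a [scale] object; the function name and ranges below are my own illustration, not part of the patch):

```python
def scale_cc(value, lo, hi):
    """Map a 7-bit MIDI controller value (0-127) onto the range [lo, hi],
    the same linear rescaling Max's [scale 0 127 lo hi] performs."""
    return lo + (value / 127.0) * (hi - lo)

# e.g. a knob driving an object's x position in a hypothetical -5..5 scene
print(scale_cc(0, -5.0, 5.0))    # -5.0 (knob fully left)
print(scale_cc(127, -5.0, 5.0))  # 5.0 (knob fully right)
```

Each mapped knob then just needs its CC number routed to the right `scale_cc`-style range before the value reaches the spatialization patch.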
This week offered me a chance to get into detail with spatialization libraries:
Adapted from: https://github.com/darkjazz/qm-spatial-audio/wiki/Open-source-free-spatialisation-tools-around-the-web
What I decided to go with was HOA Library, a collection of C++ and FAUST classes and objects for Max, Pure Data, and VST aimed at high-order ambisonics sound reproduction. It won "Le Prix du Jeune Chercheur," awarded by the AFIM in 2013. The library is free and open source, made available by CICM, the music and computer science research center of Paris 8 University. Because of that, I know I can make a lot of edits to it, and many people have used it in concert and installation settings. Now that the choice is made, I can work toward the connection between MIDI and the spatialization of objects.
This week focused on the FFT and other conversion formulas. The Fast Fourier Transform is an algorithm that computes the discrete Fourier transform (DFT) of a sequence, or its inverse (IDFT); it is the workhorse of Fourier analysis, converting a signal from its original domain (often time or space) to a representation in the frequency domain and back. For my current project, I am trying to understand how this connects signal processing to MIDI, and how it will let me translate signals into the shapes I will be making in Jitter and MaxMSP. By understanding how the transforms work, I can take the components obtained by decomposing a sequence of values into different frequencies and use them to generate multiple sounds, which can hopefully be composed into a song or piece made entirely through the patch. The project is currently SoundStroll 2.0, but it might end up being a whole different concept and Max patch when it is done; given the changes being made, the new product could be thought of as a completely different three-dimensional audiovisual spatialization tool. My main work has been on creating a different audiovisual projection while using MIDI and other signals to achieve different spatialization in MaxMSP; this work was added to my paper concurrently and carried out in MaxMSP and Jitter.
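The decomposition described above can be sketched in a few lines of Python. This is the naive O(N²) DFT for illustration only; the FFT computes exactly the same result faster, and in Max this analysis is what [fft~]/[pfft~] provide:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform:
    X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

# A pure cosine at bin 1 concentrates its energy in bins 1 and N-1
# (the positive- and negative-frequency components), each with
# magnitude N/2.
N = 8
signal = [math.cos(2 * math.pi * n / N) for n in range(N)]
spectrum = dft(signal)
print([round(abs(X), 3) for X in spectrum])
# → [0.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4.0]
```

Each bin magnitude tells you how much of that frequency is present, which is exactly the per-frequency information the patch would need to drive shapes or resynthesized sounds.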
This first week, I tried to focus my project into something tangible. Initially, I was not sure how I wanted to go about editing and extending SoundStroll 2.0, and in talks with advisors I was told that I was trying to achieve too much in a short period and that I needed to find a focus. Interestingly enough, I think that creating a bandpass filter, in addition to other improvements, can produce different effects in SoundStroll 2.0, and I'm curious to see how that might change the effects and the three-dimensional objects that are spatialized with the signals.
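As a sketch of what that bandpass filter involves, here is a minimal Python version of a standard biquad band-pass, with coefficients in the style of the widely used Audio EQ Cookbook; in Max, the same five normalized coefficients could drive a [biquad~] object. The function names and the example frequency/Q values are my own illustration, not anything already in the patch:

```python
import math

def bandpass_coeffs(fc, q, sr):
    """Biquad band-pass coefficients (Audio EQ Cookbook form),
    peak gain normalized to 0 dB at center frequency fc."""
    w0 = 2.0 * math.pi * fc / sr
    alpha = math.sin(w0) / (2.0 * q)
    b0, b1, b2 = alpha, 0.0, -alpha
    a0, a1, a2 = 1.0 + alpha, -2.0 * math.cos(w0), 1.0 - alpha
    # Normalize by a0, giving the five values [biquad~] expects.
    return [b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0]

def biquad(samples, c):
    """Direct-form I filter:
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    b0, b1, b2, a1, a2 = c
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x1, x2, y1, y2 = x, x1, y, y1
        out.append(y)
    return out

# e.g. a narrow band around 440 Hz at a 44.1 kHz sample rate
coeffs = bandpass_coeffs(440.0, 5.0, 44100.0)
print(coeffs)
```

Because b0 + b1 + b2 = 0, the filter completely rejects DC, and the Q parameter controls how narrow the band around fc is, which is what would let different signals carve out different effects in the spatialized scene.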
In understanding what needs to be done for the project, I still have several main steps:
1. Completely overhaul the project and add a new, more advanced spatialization toolset/visualizer that works with the updated software.
2. Add vocal recognition through a speech-processing patch of my own making, and get it working with the updated Max project.
3. Add components that track the Fourier transforms for analysis, supporting both the original audio and the added speech-processing vocal recognition.
4. Make sure the vocal, MIDI, and OpenGL components work together so that objects are spatialized through either the vocal component or the MIDI interface.
5. Potentially add a virtual reality (VR) component to the project so that you can traverse the world.
Ultimately, I want SoundStroll 2.0 to be a logical successor to its former self. SoundStroll 2.0, or the new edition I plan to create, will use Fourier transforms for vocal recognition through a speech-processing application, whose output can also be analyzed for deeper findings about the sound, aurally or mathematically; a different, more advanced spatialization toolset to create a scene that you can traverse; and a more connected environment built around Max for Live, Ableton Live, and Reason, which will allow new sounds and objects to be triggered and spatialized through SoundStroll 2.0.