- Name of Your Project: Iterative Solvers for Systems of Linear Equations
- What research topic/question your project is going to address? The effectiveness of various iterative methods at finding solutions to systems of linear equations with various characteristics, taking an experimental/numerical-analysis perspective.
- What technology will be used in your project? C++ for implementing the solvers.
- What software and hardware will be needed for your project? Cluster, or some other computer with sufficient memory for the testing.
- How are you planning to implement? I will choose a class of iterative solvers and test them against various types of matrices.
- How is your project different from others? What’s new in your project? It is closer to some of the more theory oriented projects, but leans heavily towards applied mathematics.
- What’s the difficulties of your project? What problems you might encounter during your project? The theory of iterative solvers is quite heavy on numerical analysis, which might limit the scope of the project to a further restricted class of solvers.
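As an illustration of the class of solvers under study, here is a minimal sketch of the Jacobi iteration in Python/NumPy (the project itself plans a C++ implementation; the test matrix, tolerance, and iteration cap here are arbitrary choices of mine):

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Solve Ax = b with the Jacobi iteration.

    Converges when A is strictly diagonally dominant; other matrix
    classes may diverge, which is exactly the kind of behaviour the
    project would measure experimentally.
    """
    D = np.diag(A)            # diagonal entries of A
    R = A - np.diagflat(D)    # off-diagonal part of A
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# A small diagonally dominant test system
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b)
```

Gauss-Seidel, SOR, and Krylov methods would slot into the same experimental harness, varying the matrix characteristics (size, conditioning, sparsity) as the independent variable.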
Buying tickets for popular concerts: the application imitates real users to buy concert tickets on a website. Users can set it up before the ticket sale opens. As soon as tickets go on sale, the application will immediately buy them. If the tickets are sold out, it keeps reloading the webpage until new tickets become available.
One time I wanted to buy a concert ticket, but the concert was very popular, and tickets sold out immediately. I kept reloading the website, hoping other people would cancel their orders, but I couldn’t keep reloading the page every second for the whole day. Then I thought: what if I had software to do that for me? I did some research and found that web browser automation tools, for example Selenium, can imitate real-user operations in a web browser. I also found that most existing software of this kind buys train tickets rather than concert tickets. The primary technology I will use is Selenium, and I will also need to learn how to read page source code. I will primarily write a Python script with Selenium to imitate a real user buying a ticket online. Selenium provides a web driver for Google Chrome to navigate to the target website, and the driver can locate elements by tag name, element ID, XPath, class name, etc., and perform automatic clicks and input.
But there are several difficulties in my project. The first is money security. Some ticket websites require payment immediately when placing the order, so users need to provide their website account and bank card information beforehand; I need to be careful about bank card security. The second problem is that this software is website-specific: different websites require different processes to buy tickets, so I need to write different code for different sites.
The third problem is that some websites require complicated verification on user login, and I don’t know whether web browser automation can deal with very complicated verification. The last issue is that I don’t know if the scope of this project is big enough. I could have my software support several popular ticket websites and possibly build a mobile app version as well.
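The buying loop described above can be sketched independently of any specific website. In this sketch, `is_on_sale` and `try_buy` are hypothetical placeholders that would wrap Selenium driver calls (e.g. `driver.get(url)`, `element.click()`) for the target site:

```python
import time

def ticket_bot(is_on_sale, try_buy, poll_interval=1.0, max_attempts=100):
    """Control loop of the ticket-buying bot.

    `is_on_sale` and `try_buy` are placeholders: in the real
    application they would wrap site-specific Selenium logic.
    Returns True once a purchase succeeds, False if we give up.
    """
    for _ in range(max_attempts):
        if is_on_sale() and try_buy():
            return True            # order placed
        time.sleep(poll_interval)  # pause before reloading the page
    return False
```

Because the two callables are injected, the same loop works for every website; only the Selenium element lookups change per site, which localizes the website-specific code mentioned above.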
- Name of Your Project
Penetration testing to show the weaknesses of businesses
- What research topic/question your project is going to address?
What results can businesses gain from investing in someone to do penetration testing?
- What technology will be used in your project
Personal computer & Network Adapter
- What software and hardware will be needed for your project?
Kali Linux OS, Virtual Box
- How are you planning to implement?
Using the network adapter and Kali Linux’s built-in features makes accessing a specific network easier. Goals are described at the beginning, and the attack is centered around those goals.
- How is your project different from others? What’s new in your project?
Other than Byron Roosa, who graduated a few years ago, I am the only student who has had an interest in security, which makes my project unique. My project is centered around helping the community: a penetration test’s sole purpose is to help a business detect security flaws.
- What’s the difficulties of your project? What problems you might encounter during your project?
An obvious difficulty with my project is the legality associated with hacking. Getting a business to allow me to do this would take some convincing. Learning to navigate Kali Linux correctly would also be a difficulty.
Name of Your Project?
What research topic/question your project is going to address?
My project aims to improve the process currently used for mainstream industrial 3D printing.
What technology will be used in your project?
- A small computer (Raspberry Pi)
What software and hardware will be needed for your project?
- A pixel grid, made of either wood or metal, through which the molten plastic flows.
- Raspberry Pi or similar small computer to control the opening and closing of the pixel grid.
- Plastic to print the 3d object out of.
- Motors that will control the vertical movement of the pixel grid.
How are you planning to implement?
My implementation includes a pixel grid that is controlled by a computer. The 3D object is converted into layers, and each layer then controls which pixels the molten plastic flows through. Based on the layer, the pixels ‘open’ and ‘close’ to form the 3D object. The pixel grid moves up as it finishes a layer.
How is your project different from others? What’s new in your project?
My project is an entirely new concept; this process of 3D printing does not exist. Current 3D printers consist of a nozzle through which the molten plastic flows; the nozzle then moves around forming the object, which is very time-consuming. My process uses a pixel grid that allows molten plastic to flow through the pixels, forming the object.
What’s the difficulties of your project? What problems you might encounter during your project?
- Since this is an entirely new process of 3d printing, I will not be able to use pre-existing online resources.
- I have no experience working with a raspberry pi and I’m not familiar with hardware programming.
- My process relies on the density of the grid: the more ‘pixels’ my grid contains, the more detailed the 3D object will be.
- Creating an extremely dense grid could be expensive and not viable in an academic setting.
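A sketch of the per-layer control logic described in the implementation section: each layer of the sliced model becomes a grid of open/close commands. The GPIO side is omitted since the pin wiring is undecided, so commands are plain tuples the Raspberry Pi code would later translate into signals:

```python
import numpy as np

def layer_commands(layer):
    """Convert one slice of the voxel model into pixel commands.

    `layer` is a 2D boolean array: True where molten plastic should
    flow through. Each (row, col, 'open'/'close') tuple would be
    mapped to a GPIO signal for that pixel's valve on the Pi.
    """
    commands = []
    for (r, c), solid in np.ndenumerate(layer):
        commands.append((r, c, "open" if solid else "close"))
    return commands

# A 3x3 layer with a plus-shaped cross-section
layer = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]], dtype=bool)
cmds = layer_commands(layer)
```

After all commands for a layer are applied and the plastic has set, the motors would raise the grid and the next layer's commands would be issued.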
- Name of Your Project
Breakdown of Mathematical Proofs Using Natural Language Processing
- What research topic/question your project is going to address?
How available and useful current technology is for assessing the completeness of mathematical proofs in intro/bachelor-level mathematics courses.
- What technology will be used in your project?
Python Natural Language Processing libraries, and SwiftUI for the GUI.
- What software and hardware will be needed for your project?
Languages: Python, Swift
- How are you planning to implement?
The end goal would be to assist students taking undergraduate classes.
- How is your project different from others? What’s new in your project?
I have not seen or been able to find applications of natural language processing in the field of algebra and mathematical proofs.
- What’re the difficulties of your project? What problems you might encounter during your project?
Finding a large enough dataset to work with, that is consistent. Creating a strict enough rule set to classify proof components by.
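As a starting point for the rule set mentioned above, here is a sketch of a keyword-based classifier for proof sentences. The labels and cue words are my own guesses, not an established taxonomy; a real system would layer NLP-library features (POS tags, dependencies) on top of rules like these:

```python
import re

# Hypothetical rule set: keyword cues that often mark the role of a
# sentence in an undergraduate proof.
RULES = [
    ("assumption", r"\b(assume|suppose|let|given)\b"),
    ("conclusion", r"\b(therefore|thus|hence|qed|so)\b"),
    ("case_split", r"\b(case|without loss of generality|wlog)\b"),
]

def classify_sentence(sentence):
    """Label one proof sentence, or 'step' if no rule matches."""
    lowered = sentence.lower()
    for label, pattern in RULES:
        if re.search(pattern, lowered):
            return label
    return "step"

labels = [classify_sentence(s) for s in [
    "Assume n is an even integer.",
    "Then n = 2k for some integer k.",
    "Therefore n^2 is even.",
]]
```

Checking completeness would then amount to verifying that the labeled components occur in a valid order (e.g. no conclusion before any assumption), which is where the "strict enough rule set" difficulty shows up.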
Predicting the winner of a NBA match
Using technology to help predict the outcome of sports matches. The goal is to predict the winner of an NBA game using machine learning techniques, based on the factors that influence a match and which of them help a team win.
Machine Learning, Python
8 GB DDR4 RAM, 700 MB of space for the Spyder IDE and project files, Intel i3 7th-gen processor or above.
No additional hardware required.
We will use this dataset to compute an entropy-based score, a reward system that will be used to estimate the probability of each of the two teams winning the current match.
We will use various factors to calculate this score, for example, the effect of player injuries on the outcome of a match, past performance of players in the season, home court or away court, players’ past records and scores against the opposing team, rivalries between players on opposing teams, the coach’s record, etc.
The project brings a unique way to predict the winner, which can be helpful to betting agencies and match analysts. We shall run our algorithm while the match is being played to dynamically account for any injuries or fouls that occur during the match. This will give a more accurate estimate of each team’s probability of winning.
The difficulties would be reaching the right accuracy level, trying different algorithms to find the best-performing one, and handling the large quantity of data involved.
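As a sketch of the entropy computation, here is the Shannon entropy of a two-outcome match. How the listed factors combine into the win probability `p_home` is left open in the proposal, so this fragment only illustrates the uncertainty measure itself:

```python
import math

def win_entropy(p_home):
    """Shannon entropy (in bits) of a two-outcome match.

    p_home is the estimated probability that the home team wins.
    Entropy is highest (1 bit) for an even match-up (p = 0.5) and
    falls toward 0 as one team becomes a clear favourite, so it
    measures how uncertain the prediction is.
    """
    p_away = 1.0 - p_home
    return -sum(p * math.log2(p) for p in (p_home, p_away) if p > 0)
```

Updating `p_home` live (after an injury or foul) and watching the entropy drop would be one way to quantify the in-game re-prediction described above.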
I have been working on editing my final draft for my proposal and updating it with new information that I discussed with Xunfei about the direction of the project. I have been adding information on the different directions I can take this project.
Submitted Proposal Draft 2. Worked on the presentation. I have decided that instead of just generating the geometry of the levels, it will be a lot more interesting and compelling to also figure out how to generate the puzzles within the level. This will be tricky, but I can think of a few ways to do it. The first and easiest would have some chunks fully contain a simple puzzle; while this would be effective and ensure completability, it would get repetitive fairly quickly and not be very challenging. The second method would be to have metadata contained within each chunk and a more developed chunk-selection process: for example, if a chunk contains a four-block vertical jump, then before it there must be a chunk that contains a crate. The last method I thought of would be another algorithm that loops over the geometry of the level and places puzzles afterward; this would be quite tricky, but it would simplify the generation of the geometry.
I have finally submitted the first draft of my proposal for PCG using ORE in my video game. I am excited to keep working on it; initial results are promising.
I have been continuing work on the first draft of my proposal; progress is slow, but I will eventually get there. I have been doing a little work to try to improve my simplified ORE algorithm, but I think I will have to postpone this so that I can focus on the proposal.
This week, I spent time figuring out how to make the software publication-ready, and discussed the whereabouts of the server and database with Craig.
This week I have been:
- Practicing presenting with the poster.
- Looking up additional ways to explore my project.
I have designed a simplified ORE algorithm. I created a chunk library using ASCII characters, with only 15 different chunks. It is a simplified ORE algorithm because each chunk has two anchors, so there is only ever one place for the extension to take place; it also selects chunks randomly, so there is no selection process. While this algorithm is simple, I have been able to generate levels with great diversity. One of the parameters I feed it is the number of chunks to stitch together; I find that 40 looks best, and the more chunks there are, the more variety. Since there is only ever one extension point, it does not allow for branching, which is one facet that makes a game more enjoyable.
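The simplified algorithm above can be sketched with a toy three-chunk library (the real library has 15 chunks; these ASCII rows are invented stand-ins). Each chunk's left and right edges act as the two anchors, so extension only ever happens on the right:

```python
import random

# Toy ASCII chunk library: each chunk is a list of rows. By
# construction every chunk connects on its left edge and exposes one
# extension point on its right edge, mirroring the two-anchor
# simplification described above.
CHUNKS = [
    ["....",
     "####"],
    ["...#",
     "####"],
    ["#...",
     "####"],
]

def generate_level(n_chunks, rng=random):
    """Stitch n randomly chosen chunks together left-to-right."""
    rows = ["", ""]
    for _ in range(n_chunks):
        chunk = rng.choice(CHUNKS)
        for i, row in enumerate(chunk):
            rows[i] += row
    return rows

level = generate_level(5)
```

With a single extension point the result is always a linear strip; supporting branching would require chunks with more than one open anchor and a bookkeeping structure for unfilled anchors.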
This week, I finalized my poster and prepared my paper for the evaluation draft submission. I also met with Craig to discuss taking the application live.
This week I spent working on getting my poster ready for printing. Other than that I also spent several hours preparing my paper for the evaluation draft due Friday. Not there yet but making progress.
This is a mega update to make up for several weeks of missing updates.
- I did a preliminary cost analysis for my first idea, which would result in a budget of either $400 or $3500 depending on which hardware I went with.
- I applied for the Yunger Fellowship as a funding source to be able to afford the hardware needed for my first project idea. I did not end up getting it, unfortunately, so I called in a favor for an alternative source of funding.
- In the midst of my research, I discovered that as near as I can tell, no one has ever published research on applying Augmented Reality to the Management aspect of theater, only the artistic side.
- I decided to combine my first and second ideas and switch from using Microsoft Hololens as the hardware to Leap Motion’s Project NorthStar, an open-source Augmented Reality device, after Microsoft decided that they couldn’t care less about small independent developer types such as myself.
- After settling on NorthStar, I started delving into the Unity APIs as well as sourcing hardware and 3D-printing capabilities.
- Currently, I am waiting for the Leap Motion sensor to arrive in the mail so I can do concrete work with the APIs, and for my test print to finish to determine whether I need to buy the recommended filament or if the one we use will suffice.
This week I have been:
- Discussing with my advisor about results of my project.
- Editing the final version of the poster.
My literature review has definitely guided the direction I want to take my project in. While researching the different kinds and applications of PCG, I found a lot of cool examples, such as tile-based generation in Spelunky or rhythm-based generation in other games. I am most intrigued by Occupancy-Regulated Extension (ORE), an algorithm that looks at where a player can exist in a level and branches out based on a predetermined library of level segments. This method seems the most applicable to my video game, or at least to its geometry, and it gives the user some control over which level segments go into the library. I have yet to find source code for it, however.
I spent most of this week writing the most recent draft, and then putting together a first draft of the poster. Spent some time debugging, but my focus was on the paper and poster.
I have finally finished my literature review, and it has been super cool finding more examples of games that use PCG. I think it would be really neat if I could figure out a way to implement an algorithm that makes my game infinite! It’s also amazing how far PCG has come, from Rogue, which was basically the first, to now, where some games are much more complex and intricate.
This past week, I spent countless hours working on my Final Paper. I also spent a considerable amount of time working to correct errors and bugs in my code inside of MaxMSP.
This week I have been:
- Debugging some errors I ran into while writing the test harness.
- Continuing to work on my paper.
This week I continued debugging my code. I now have everything I need working. It’s not as good as I think it can be, but it is fully operational. I spent a little time starting paper revisions, and that’s next up.
This week I’ve been working on revising my paper according to the feedback and suggestions I got from Xunfei.
I’ve been trying to polish out any remaining functionality issues and format it better for presentations.
Still polishing the paper. Been revising section by section and created a new diagram for the paper.
Need to fix an issue with the citations.
Will make time to try to tweak the experiment for the paper to see if I can get different results.
Talked to Dave and uploaded the outline for the proposal paper.
This week, I started working on my poster and revised my paper based on the feedback I received from Dave and Ajit.
- The spring break week dealt with mapping MIDI control into the acoustics of the three-dimensional space, which I have successfully done.
- I have also successfully added an FIR-based buffer into the program to track analytics. You use the buffir~ object when you need a finite impulse response (FIR) filter that convolves an input signal with samples from an input buffer. With that, you can look at an FIR filter response graph in MaxMSP, which calculates the filter response of the FIR in the main sound patch for a given number of samples (coefficients).
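Outside of Max, the operation buffir~ performs can be illustrated in NumPy. The three-tap moving average below is an arbitrary example coefficient set, not the filter from the actual patch:

```python
import numpy as np

def fir_filter(signal, coefficients):
    """Convolve a signal with FIR coefficients, as buffir~ does
    with the samples stored in its input buffer.
    """
    return np.convolve(signal, coefficients, mode="full")

taps = np.ones(3) / 3.0              # 3-point moving average
impulse = np.zeros(8)
impulse[0] = 1.0
response = fir_filter(impulse, taps)  # impulse response equals the taps
```

Feeding an impulse through the filter recovers the coefficients themselves, which is the same relationship the MaxMSP filter response graph is visualizing.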
Created and ran experiment. Noted the results down.
Finished the first draft of the Capstone paper. Currently revising and polishing it.
This week I have been:
– Continuing experimenting with different hyperparameters of my models.
– Writing analysis based on the results.
– Continuing to work on the paper.
This week I turned in a new draft of my paper and fixed a few bugs. I am now ready to collect most of the data I’ll need.
This week I tested the reader with the student interface, and checked whether self-check-in and check-out worked properly. I also met with Craig and discussed a plan to migrate the application to the server and perhaps have it ready for the EPIC expo.
This week I have:
– Reran the tests of my model and continued analyzing the results.
– Worked on my paper.
This week I spent most of my time working on the paper, and finished the implementation and design sections. I also started working on the administrator interface and got a good portion of it done.
This week I worked on improving my final paper and on the MIDI mapping and audio of my project inside MaxMSP. I primarily worked on getting the audio formatted correctly so it could loop and connect each file with the associated object in the three-dimensional space within MaxMSP. Secondly, I worked on the formatting of my paper; it needed to flow from topic to topic better while also recognizing why each topic was relevant to the project as a whole.
This week I have been working on my paper both revising the submitted partial draft and adding the portions that were not done yet.
I have also done some work on my project itself. I have been doing a lot of refactoring. I made it so all the variations I am testing can be turned on and off via toggle buttons and all things that need to be initialized to run the patch are accessible in the main patch window instead of having to open the subpatches. I also revised portions of my project that had large repeated sections of code.
This week I have been:
– Continuing with the implementation of my design.
– Setting up a hyperparameter space to analyze how the hyperparameters affect the model.
– Testing the model using different parameter combinations from that space.
– Continuing with my paper.
This week I
-implemented feedback from the first draft
-started writing the rest of the paper
-did some minor debugging and commenting
Built a functioning, testable (not yet accurate) neural network that takes in my input of heuristic data and outputs a direction.
Since the puzzle states were strings, I wanted the output to be the value added to the index of the “_” to move to a new position, but negative values cannot be used as targets or labels in a neural network, so instead I encoded the moves using the values 0–3.
I created a function that takes the output of the neural network and converts it to a move for the puzzle.
I also now know which specific layers I need to experiment with to make my neural network accurate.
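A sketch of the label scheme and the decoding function described above, assuming a 3x3 sliding puzzle stored as a 9-character string (the board size is my assumption; the write-up only says states are strings with a “_” blank):

```python
# Assumed 3x3 sliding puzzle stored as a 9-character string with "_"
# as the blank. Class labels 0-3 stand in for the signed index
# offsets that cannot be used directly as network targets.
WIDTH = 3
LABEL_TO_OFFSET = {0: -WIDTH, 1: WIDTH, 2: -1, 3: 1}  # up, down, left, right

def apply_move(state, label):
    """Decode a network output label into a new puzzle state.

    Note: row wrapping for left/right moves is not checked here;
    a full version would also reject moves that cross a row edge.
    """
    i = state.index("_")
    j = i + LABEL_TO_OFFSET[label]
    if not (0 <= j < len(state)):
        return state                  # illegal move: state unchanged
    chars = list(state)
    chars[i], chars[j] = chars[j], chars[i]
    return "".join(chars)

new_state = apply_move("12345678_", 0)  # slide the blank up
```

At inference time, `label` would be the argmax of the network's 4-way output, and this function turns it back into a concrete board transition.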
- Found some more research papers to read which were related to the topic that I chose.
- Worked on the literature review.
In the past week, I have spent time doing research to find interesting papers that I think will be related to my ideas. For each of my ideas there are at least five papers to look at at the moment. However, for the last idea, a mobile application for time scheduling, I could only find some papers about effective time scheduling, and most are just about general mobile application development. I have also done a first-pass read of some papers to briefly learn what the authors were proposing and how well those papers fit my topics. I am also looking for available datasets for my first two ideas.
I have continued working on my annotated bibliography, but I have not finished it yet. I am finding it hard to focus on and stay motivated for a project that I cannot physically work on. I am still missing a couple of sources for the annotated bibliography, and I have been a bit lazy about looking for them. I think I will start by looking at the citations in the papers that I have already found.
This week mainly dealt with the writing of my final paper. My paper was a mostly complete draft that worked for my initial goal and vision of SoundStroll 2.0. However, in its final form, it will be called SonoSpatial Walk, and it will be its own completely separate project. The added changes allow for the generation of objects through MIDI, and hopefully other things will be possible with the source code, which I have completely changed myself. With the object generation, I hope to work on the sound properties that play and loop through triggers. I am also hoping I can resynthesize sound in real time using Fourier transforms in order to completely change the sound as well.
Over this week I worked on finishing a first draft of my paper. I also worked on debugging code and trying to install some packages.
This week I:
-worked on my paper draft
-worked on fixing bugs in my software
For this week, I have been:
– Finishing the first draft of my paper.
– Continuing to read about ways to determine implicit ratings from the data.
– Testing and analyzing the initial results of the recommender system.
Read the help documents on reading CS papers to understand how to approach long papers.
I am finding it difficult to search for papers directly related to my ideas. I read a couple of papers according to the instructions, which helped me understand the material.
Met with Dave and Ajit to discuss the 3 ideas. The first 2 ideas were supported and I received some extra information about some features I can add to the project.
My ACM membership is set up, and I am trying to refine the three ideas by looking up CS papers and discussing them with fellow peers.
This week I finished implementing a rough version of the student user interface. I spent considerable time discussing the logic behind student check-out and check-in and what measures were necessary to put in place. I received feedback from Ajit on the design and modified my approach based on it.
This week I have been:
- Working on the first draft of my paper.
- Reading a research paper that uses the same dataset as my project to derive a formula for getting an implicit rating from the song play count.
- Writing an initial version of the collaborative filtering algorithm; I will try to test it next week.
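One common choice for deriving an implicit rating from play counts is a capped log scale (repeated plays give diminishing evidence of extra preference). The formula from the paper being read is not reproduced above, so this is only an illustrative stand-in with an assumed 0-5 rating range:

```python
import math

def implicit_rating(play_count, cap=5.0):
    """Turn a raw play count into a bounded implicit rating.

    A log scale is assumed here purely for illustration; the exact
    formula from the referenced paper may differ.
    """
    if play_count <= 0:
        return 0.0
    return min(cap, 1.0 + math.log2(play_count))

ratings = [implicit_rating(c) for c in (0, 1, 4, 100)]
```

These derived ratings would then fill the user-song matrix that the collaborative filtering algorithm consumes.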
I have been a little behind on synthesizing my articles into the annotated bibliography; I have mostly just been trying to collect more articles. The gesture control idea and the educational app idea seem a little difficult to complete in a semester of work, so I think I will probably continue forward with PCG for my video game. It is something I am definitely passionate about, and I can’t say the same for the others.
- I created a large variation of my algorithm that spatializes the visualization based on the just intonation ratios instead of correlating directly with the frequencies. This involved implementing a new module that calculates the just intonation ratio and scales the sine wave visuals to it.
- I created tables to use in my paper draft and started turning my outline into my draft.
This week, while I was not able to work on the coding aspect to my project, I did receive a synthesizer from Professor Forrest Tobey, and have been working extensively in MaxMSP to get it to react to my program.
Also, I have been working on my paper to get it ready for the 02/25/19 deadline.
I’m still trying to get the data over. I’ve been working on it for two to five hours every day, but errors keep coming up. I’ve also researched algorithms and have a better idea of my project’s scope.
I tested that the last of my dependencies is on the cluster and functional. Started implementing the second major piece of my approach, SURF. Taking a little longer than expected but not a major delay.
I changed a piece of my pipeline, which created some issues. I am working on fixing those, and hopefully improving the overall functionality in the process.
Managed to get Keras, an open-source neural network library, installed in Jupyter.
Currently focusing on building a sample neural network, adjusting the data into a format that can be used by the network, iterating on the architecture required for the neural network, and writing a draft of the first few sections of the Capstone paper.
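While the Keras model is still being set up, the intended data flow can be sketched in plain NumPy. The layer sizes here (4 input features, 8 hidden units, 3 classes) are placeholders, not the real architecture:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dense(x, w, b, activation):
    """One fully connected layer: affine transform plus activation."""
    return activation(x @ w + b)

# Placeholder shapes; the real model is being prototyped in Keras,
# and this NumPy version only illustrates the forward data flow.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

x = rng.normal(size=(5, 4))   # a batch of 5 samples
probs = dense(dense(x, w1, b1, relu), w2, b2, softmax)
```

The "adjusting the data into a format the network can use" step amounts to producing the `(batch, features)` array shape that the first layer expects.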
This week I discussed the structure of my paper with Ajit, and received feedback on how to explain the design and implementation sections. I also started working on implementing the student user interface.
For this week, I have been looking for research papers on the three topics that I have chosen, specifically for the Text Categorization and Air Quality Monitoring systems. Two things I found out:
– Air Quality Monitoring system: There has been much research on low-cost air quality systems, some of which uses the same technology as I do. This gives me two advantages: first, I can reference and study those papers for my research; second, I can compare my model’s results with those of other researchers, and matching results would indicate that my monitoring system is working correctly.
– Text Categorization: It has been a research and application topic for over a decade. Many researchers have made a considerable amount of progress on this topic, which gives me more insight into how to approach it, which model I should use, etc. Furthermore, I have not found any documents about applying Text Categorization to social media, so my application could be a new addition to the topic. One thing I need to worry about is how to get the posts and analyze them. Also, I need a pre-classified dataset to train my model on, which I haven’t found yet. Other than that, I think this can be a great topic to look into.
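To make the text categorization pipeline concrete, here is a minimal sketch assuming scikit-learn. The four example posts and two labels are invented stand-ins for the pre-classified social media dataset that still needs to be found:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented corpus standing in for a real labeled dataset.
posts = [
    "great goal in the last minute of the match",
    "the team won the championship game",
    "new phone has an amazing camera and battery",
    "the laptop processor is fast and quiet",
]
labels = ["sports", "sports", "tech", "tech"]

# TF-IDF features feeding a naive Bayes classifier, one standard
# baseline for text categorization (not necessarily the final model).
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(posts, labels)
prediction = model.predict(["the match was a great game"])[0]
```

Swapping the toy corpus for real posts, and the baseline classifier for whichever model the literature suggests, leaves the rest of the pipeline unchanged.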
This week, I was able to realize partial spatialization in MaxMSP using the new tool that I had found previously, the HOA Library. Right now I am trying to get my object driver working so I can create multiple figures. Also, Forrest wanted me to map the knobs of a MIDI keyboard to Max and the patch.
I have started collecting papers on my three ideas.
Many past projects have used gesture-controlled navigation, so I think it will be hard to differentiate mine from them, but most of them seem to require an extra device. A couple are controlled by just a camera; that would be a neat way to solve the problem and wouldn’t require people to buy anything.
For my second idea, PCG for my video game, I have gathered sources where people use PCG to remake games or even make new ones. None of them seem very similar to what I would like to implement. I have also found a few good overviews and literature reviews of the subject; these will come in handy.
The papers I found for my last idea, an educational app for ultimate frisbee, mostly cover the theories and methods behind educational applications, including apps such as Duolingo and other language-learning tools. I figure a lot of this could be mapped over to any educational application. It seems like many of the problems in this field deal with retention.
I have been trying to get the sensor to give me significant results and have had a lot of trouble with that. I mostly did research and experimentation to try and get it to work.
This week I finished my implementation of the SIFT algorithm and will start implementing SURF next.
I am also looking at ways to organize images to improve accuracy. Currently I plan to extract keypoints and then organize photos based on similar keypoints rather than locality.
This week I did the following tasks:
-Met with Dave about the structure of my outline and my experimental results section
-Met with Forrest to get feedback on my work.
-Began to work on a new variation of my algorithm based on Forrest’s feedback
-Read chapter 5 in Writing for Computer Science
These are the things I did for this week:
– Read Chapter 5 and did the quiz.
– Reworked the representation of the 2D matrix, since the original method used too much memory and was not very practical.
– Started implementing the collaborative filtering algorithm for music recommendation.
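A sketch of what the reworked matrix representation might look like, assuming SciPy’s compressed sparse rows and cosine similarity between users. The toy play counts are invented, and the project’s actual layout and similarity measure may differ:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy play-count matrix: rows = users, columns = songs. A sparse
# representation stores only the nonzero entries, which is the
# memory fix for mostly-zero user-song matrices.
counts = csr_matrix(np.array([
    [5, 0, 0, 2],
    [4, 0, 0, 1],
    [0, 3, 7, 0],
]))

def cosine_user_similarity(m):
    """Pairwise cosine similarity between the user rows of m."""
    norms = np.sqrt(m.multiply(m).sum(axis=1))       # per-row L2 norms
    normalized = m.multiply(1.0 / np.asarray(norms)) # unit-norm rows
    return (normalized @ normalized.T).toarray()

sim = cosine_user_similarity(counts)
```

User-based collaborative filtering would then recommend songs that a user’s nearest neighbours (highest similarity rows) have played but the user has not.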
I have talked with Ajit about all three ideas and received some feedback and suggestions. I have also started finding research papers related to my ideas to read for next week.
This week I finalized the database schema with Ajit and familiarized myself with PostgreSQL commands after receiving the login information from Craig. I faced an unexpected challenge with ordering the RFID device from eBay. Instead, I researched for two days, found a few other cost-friendly options in the US, and proposed that the department purchase one of them.
I am planning to finish all the software end of the project by the time I get my hands on the device!
I decided to pivot on my ideas a little because two of them were already solved and I didn’t feel I had anything to add to the field. I have found papers on my new ideas, listed below, and am starting to read them.
Idea 1 : Gesture Controlled Mouse and Keyboard
Description: The idea is to fully replace the keyboard and mouse of a computer by using a 2D camera to track hand motions. I really like the idea of incorporating swipe-to-text as seen on some mobile phones; this could possibly increase typing speed.
Idea 2 : Procedural Level generation for 2D platform game
Description: Last semester I built a game in Unity with two friends; we have continued to work on it with the intention of one day selling it. A large part of this process will be the generation of new levels, and one idea is to develop an algorithm that procedurally generates them. While procedurally generated games are fairly common, procedurally generated platformers are far less so. There are a few challenges involved that I think would make this interesting.
Idea 3: Educational app for learning the rules of Ultimate Frisbee
Description: My other new idea is to create an app similar to educational apps such as Duolingo, but with the intention of teaching the rules of ultimate frisbee. Since it is a self-refereed sport, it is much more important that the players know the rules themselves. However, the rules in the rule book are very wordy and sometimes hard to follow; this app would provide an easily accessible way to learn the rules without having to sit down and read a book.
This week I made a first draft of an outline for the paper. I also worked on implementing the first of the three algorithms I plan to use in my research, Scale-Invariant Feature Transform. While not completely finished, I have most of the framework complete and am just about on schedule.
This week offered me a chance to get into detail with spatialization libraries:
- Jamoma – C++ and Max/MSP general purpose audio and spatialization toolset with standard interfaces, requires custom installation depending on the version of Max though. This library is needed to use most of Tom’s Max patches.
- NASA Slab – (Older) open source project for testing spatial auditory displays, requires registration to NASA open source archive.
- CICM Higher Order Ambisonics Library – SuperCollider (under development), Csound, Pd, Faust, oFx, Max/MSP, and C++ archive of useful ambisonics tools; renders down to binaural well but is computationally quite intensive. This library is required to use most of Tom’s Max patches.
- Sound Field Synthesis Toolbox for MATLAB – a Python version exists as well. The Sound Field Synthesis (SFS) Toolbox for MATLAB/Octave lets you experiment with sound field synthesis methods like Wave Field Synthesis (WFS) or near-field compensated Higher Order Ambisonics (NFC-HOA). There are functions to simulate monochromatic sound fields for different secondary source (loudspeaker) setups, to render time snapshots of full-band impulses emitted by the secondary source distributions, or even to generate Binaural Room Scanning (BRS) stimulus sets to simulate WFS with the SoundScape Renderer (SSR).
- MIAP – Max/MSP objects, not spatial audio per se but pretty cool. More spatial parameter space exploration, though the binaural example is in the pack.
- Octogris – OSX DAW 8 channel spatialization plugin
- Spatium – Plugin (AU), Max/MSP, Standalones (OSX). Modular open source software tools for sound spatialization: renderers, interfaces, plugins, objects. Got some nice processing based physic interactions for spatial control.
- ambiX – Ambisonics spatialization Plugin compatible with Reaper, Ardour, MaxMSP, Bidule or as standalone applications with Jack.
- HOA – Higher Order Ambisonics (HOA) resources for Pure Data and Max from Paris Nord university.
- ATK – Ambisonic Toolkit for Reaper and SuperCollider.
- Sonic Architecture – resources for ambisonics in Csound and the blue environment from Jan Jacob Hofmann.
- IanniX – a graphical open-source sequencer for digital art. It requires sound-making software or hardware connected to the sequencer. The sequencer sends instructions (e.g. OSC) and lets you create 2D and 3D scores by programming the behavior of sliders and triggers.
- Zirkonium – tool from zkm to spatialize music.
- NPM – web audio classes for Ambisonic processing FOA and HOA.
- [omnitone](https://github.com/GoogleChrome/omnitone) – spatial audio on the web, by Google.
Adapted from: https://github.com/darkjazz/qm-spatial-audio/wiki/Open-source-free-spatialisation-tools-around-the-web
What I decided to go for was the HOA Library, because it is a collection of C++ and FAUST classes and objects for Max, PureData, and VST aimed at higher-order ambisonics sound reproduction. It won “Le Prix du Jeune Chercheur,” awarded by the AFIM in 2013. The library is free, open source, and made available by CICM, the research center for music and computer science at Paris 8 University. Using it, I know that I can make a lot of edits, and many people have used it in concert or installation settings. Now that that’s decided, I can work toward the connection between MIDI and the spatialization of objects.
Here are my ideas for topics! Another post with some cost analyses is coming soon!
Topic Name: Real-time management using Augmented Reality
Topic Description: Examine the applications of head-mounted augmented reality displays such as HoloLens or Project Northstar in real-time management scenarios like theatre stage management or NASA rocket launch management, and implement basic proof-of-concept software to eventually be used in the Theatre department as part of my Theatre Capstone.
Topic Name: Using real-time spatial mapping to improve calling for stage managers during performances
Topic Description: Examine the feasibility of using technologies such as the Kinect 2.0 in theatre to let stage managers keep better track of the positions of their actors, allowing them to make more accurate cue calls when their view of the actors might be obscured, and implement a proof-of-concept application for use in my Theatre Capstone.
Topic Name: Using microcontrollers to facilitate cross-device communication and control
Topic Description: Use Arduinos or Raspberry Pis to allow two or more very different devices to be controlled by another device. For example, allow Qlab on a computer and cues on a light board to be controlled from a single application running on a different computer. If feasible, make a proof-of-concept for use in my Theatre Capstone.
This week I accomplished several things:
-Reconfigured the way my patch combines matrices to avoid issues I was having with crossfade
-Implemented horizontal movement after note press
-Organized parts of my project into sub-patches and cleaned up some stuff
-Implemented envelopes connected to sound and video out
-Created an outline for my paper
This week I haven’t been able to get as much work done as I planned on. These are things that I did:
– Reread the proposal to answer some questions in the outline.
– Finished writing the outline.
– Made some minor changes to the program to reformat the data as suggested by Xunfei.
– Tried to run the program with the large dataset but wasn’t successful.
This week I talked to Dave about the delay on my database. I am waiting to hear back from Charlie and/or Craig so I can start working on building the database and connecting the dots. Until then I have read up on some more algorithms, specifically Reddit’s ranking algorithm and the Elo rating system. I am also writing up my outline for CS488 and cleaning up some paragraphs from last year’s paper.
Built a prototype testing agent for the Capstone that should, in theory, take in a file of problem states and run each heuristic on them (the neural network is not yet ready), then output the solution size, the number of nodes visited, and the number of nodes waiting to be visited. I haven’t been able to test this properly yet, so I will need to make time for that.
Tried writing a high-level outline of my Capstone.
Further deliberated on my three ideas. I really enjoy the idea of replacing the keyboard-and-mouse setup with a gesture recognition system based on a 2D camera; most gesture recognition systems require another device that one must buy, but cameras are already built into most laptops. My other favorite is the personal budgeting app. I am interested in this one because it applies to my life: it would be software that I would actually use to help me save money.
I have changed two of my ideas upon re-evaluating the achievability of the project given my skills and the time limit I have.
This week, I finally got my own laptop, so I re-downloaded all the necessary software and libraries for my gesture recognition system. Using the OpenCV library in Python, I was finally able to use the camera to detect the hand by separating the background from the foreground via thresholding. Then I used findContours to identify the hand within a region of interest (ROI). Finally, I made a copy of the frame containing the ROI and displayed the video in binary black and white. My code can now detect the hand against a static background.
This week I met with Ajit and discussed some of the necessary features for the user interface that the administrator will be using. This involved seeing a list of recently checked out items, adding new objects, and adding new users. I also met with Craig and discussed the back-end work. We decided to use Django and PostgreSQL.
In terms of implementation, I have created four different heuristic functions for my Capstone that take in a state and output a value based on how good the state is, with the fourth being inadmissible. Technically there are five, as two of the heuristics are additions to a single heuristic.
Modified my admissible heuristic function so it outputs a move (left = -1, right = +1, up = -4, down = +4) based on the action a search algorithm would have taken in that state.
Created my training agent function, which outputs a file containing vectors of these heuristic outputs, one per state.
Did some research into activation functions and Neural Network types to figure out what initial design I should go with for my Network.
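The heuristic-to-move encoding above can be sketched as follows. The puzzle representation here is an assumption for illustration: a 4x4 sliding puzzle stored as a flat tuple with 0 as the blank, so the left/right/up/down moves shift the blank's index by -1/+1/-4/+4.

```python
# Admissible heuristic plus a greedy move selector over the four moves.
def manhattan(state):
    """Sum of each tile's distance from its goal position (admissible)."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal = tile - 1  # tile t belongs at index t-1
        total += abs(idx % 4 - goal % 4) + abs(idx // 4 - goal // 4)
    return total

def best_move(state):
    """Return the move (-1, +1, -4, +4) a greedy search would take."""
    blank = state.index(0)
    best, best_h = None, float("inf")
    for delta in (-1, +1, -4, +4):
        nxt = blank + delta
        if not 0 <= nxt < 16:
            continue
        if delta in (-1, +1) and nxt // 4 != blank // 4:
            continue  # don't wrap around a row edge
        swapped = list(state)
        swapped[blank], swapped[nxt] = swapped[nxt], swapped[blank]
        h = manhattan(tuple(swapped))
        if h < best_h:
            best, best_h = delta, h
    return best

# One swap away from the goal: blank at index 14, tile 15 at index 15.
state = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 0, 15)
print(best_move(state))  # → 1 (move the blank right, solving the puzzle)
```

The training agent would call something like `best_move` on each state in the file and write the resulting move vectors out as labels for the network.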
This week I worked on developing a script that takes a directory and determines whether its contents are all images of the same type, and if they aren’t, converts them to JPGs. I also met with my adviser and started working on an outline that will be due next week.
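A sketch of the kind of conversion script described above, using Pillow (an assumption; the post doesn't name a library): check whether every file in the directory shares one extension and, if not, convert the stragglers to JPEG.

```python
# Normalize a directory of mixed image formats to JPEG.
from pathlib import Path
from PIL import Image

def normalize_to_jpg(directory):
    """Convert any non-.jpg images in `directory` to JPEG files."""
    paths = [p for p in Path(directory).iterdir() if p.is_file()]
    exts = {p.suffix.lower() for p in paths}
    if exts <= {".jpg"}:
        return []  # already uniform, nothing to do
    converted = []
    for p in paths:
        if p.suffix.lower() == ".jpg":
            continue
        img = Image.open(p).convert("RGB")  # JPEG has no alpha channel
        out = p.with_suffix(".jpg")
        img.save(out, "JPEG")
        converted.append(out)
    return converted
```

A real version would probably also delete or archive the originals and skip non-image files; those details are omitted here.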
This week focused on the FFT and other conversion formulas. The Fast Fourier Transform is an algorithm that computes the discrete Fourier transform (DFT) of a sequence, or its inverse (IDFT); it is central to Fourier analysis, converting a signal from its original domain (often time and/or space) to a representation in the frequency domain and vice versa. For my current project, I am trying to understand how this connects signal processing to MIDI, and how it will let me turn signals into the shapes I will be making in Jitter and Max/MSP. By understanding the multiple ways the transforms work, I can decompose a sequence of values into components of different frequencies, generate multiple sounds from those components, and hopefully compose them into a song or piece made through the patch. The project is currently SoundStroll 2.0, but it might end up being a whole different concept and Max patch when it’s done; given the changes being made, the new product could be thought of as a completely different three-dimensional audiovisual spatialization software. My main work has been creating a different audiovisual projection while working with MIDI and other signals to get different spatialization in Max/MSP; this was added to my paper concurrently and also worked on in Max/MSP and Jitter.
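The decomposition idea above, recovering the frequency components hidden in a sampled signal, can be illustrated in a few lines of NumPy (the sample rate and test frequencies here are arbitrary choices for the example, not values from the Max patch):

```python
# Decompose a sampled signal into its frequency components with the FFT.
import numpy as np

sample_rate = 1000  # Hz, assumed for illustration
t = np.arange(0, 1, 1 / sample_rate)
# A signal built from two known components: 50 Hz and 120 Hz.
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(signal)                      # frequency-domain view
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)

# The two largest magnitude peaks recover the component frequencies.
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(peaks.tolist()))  # → [50.0, 120.0]
```

In the patch, each recovered frequency/magnitude pair would drive a shape parameter in Jitter rather than a print statement.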
Here is what I worked on for my project this week:
-Worked with poly tool to process two simultaneous notes.
-Got it working, but temporarily disabled the circle-of-fifths map table. Now MIDI messages are sent down two paths for separate notes.
-Using xfade to combine matrices is not good; colors are still evenly distributed throughout the whole matrix instead of following the frequency mapping.
-Patch is getting crowded; I need to figure out the in and out tools to split up and organize the project.
-Talked with Xunfei about getting Max installed on some Earlham laptops
For this week, I have been mainly working on:
– Making a few changes to the coding environment for my project from last week for better efficiency.
– Following Xunfei’s advice to consider other music datasets. I eventually decided to use the Echo Nest Taste Profile Subset; this dataset is larger and more widely used than the LFM-1b dataset I had been using.
– Designing an efficient method to convert the original data into matrix form in order to use collaborative filtering.
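One way to do that triplet-to-matrix conversion is sketched below, assuming (user, song, play count) triplets like those in the Echo Nest Taste Profile Subset. A sparse matrix keeps memory proportional to the number of triplets rather than users × songs, which matters for a dataset this large. The toy triplets here are invented for illustration.

```python
# Build a users-by-songs play-count matrix from (user, song, count) triplets.
import numpy as np
from scipy.sparse import csr_matrix

triplets = [  # toy stand-in for the real dataset
    ("u1", "s1", 3), ("u1", "s2", 1),
    ("u2", "s2", 5), ("u3", "s1", 2),
]

# Assign each user and song a row/column index.
users = {u: i for i, u in enumerate(sorted({t[0] for t in triplets}))}
songs = {s: j for j, s in enumerate(sorted({t[1] for t in triplets}))}

rows = [users[u] for u, _, _ in triplets]
cols = [songs[s] for _, s, _ in triplets]
data = [c for _, _, c in triplets]

# Sparse matrix ready for collaborative filtering.
matrix = csr_matrix((data, (rows, cols)), shape=(len(users), len(songs)))
print(matrix.toarray())
```

From here, user-user or item-item similarities can be computed directly on the sparse rows/columns without ever materializing the dense matrix.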
Idea 1: Optical Character Recognition using Machine Learning
Optical character recognition is not a completely new field of research. However, with the advance of today’s technology, we can apply machine learning to optical character recognition and thereby recognize not only printed characters but also the more difficult handwritten characters, or characters from different types of documents such as certificates or receipts. With the help of OCR, a lot of human work can be reduced. I would like to research that specific use of machine learning in OCR to handle those harder cases.
Idea 2: Disease prediction
People have always wanted to know their health status so they can prepare and take better care of their bodies. It is not difficult to know a person’s current health status, since he/she can just go to a hospital for a general medical check. However, people also want to know what diseases they may have in the future, and this is where disease prediction using machine learning will be useful. I would like to research this topic, but the first difficulty I foresee is how to get an appropriate training data set.
Idea 3: Time management application
Since students sometimes need help planning their classes and studying for exams, I want to create an application that can initially suggest the best times for students to go to class or to study. After each term, the student can input how well he/she performed in those classes/exams, and from that the application may suggest a more specific timetable for that specific student to achieve better performance in the next term/semester.
Topic Name: Earlham Tennis App
Developing an app for tennis players to use as a tool for improvement: entering data during matches to get score stats, errors, and serve percentages. All this data will be stored in a database and can be accessed at any time.
Topic Name: Box Office App
Again, develop a mobile application, this time to manage the Earlham Box Office. Users would be able to look at the event schedule and purchase tickets. To make it efficient, student data will have to be obtained from the directory.
Topic Name: Sensor
If possible, make a sensor that would be attached to a tennis racquet to determine how much strength is used and how many balls are hit in a span of time. This data would be collected and, together with the body analytics of the player, used to determine how to improve performance. The stats could then be viewed on a web application.
Researched how image processing works (tutorials, examples using python and open-source software OpenCV). ~20 minutes on 1/19/19, ~30 minutes on 1/21/19
Made sure all necessary software was downloaded to my computer. ~30 minutes on 1/17/19
Adjusted timeline. ~15 minutes on 1/23/19
Found an advisor and set up a meeting time to plan my project. Tested that the libraries I plan on using are available and functional on the cluster. Finally, reviewed my project to make sure the goals I’ve set myself are achievable and reasonable.
This is my first post for CS488.
- Met with Ajit to plan the next few weeks, and decided to start the data collection this week.
- Edited the timeline a little bit to include implementation of the project.
- I’ve shared my box folder with Dave and Charlie and will soon be contacting Andy Moore for access to volumetric data collected by the Geology students across campus.
This first week, I tried to focus my project into something tangible. Initially, I was not sure how I wanted to go about editing and making SoundStroll 2.0, and I had talks with advisors who told me that I was trying to achieve too much in a short period and that I also needed to find a focus. Interestingly enough, I think that creating a bandpass filter, in addition to other improvements, can produce different effects in SoundStroll 2.0, and I’m curious to see how that might change it regarding effects and the three-dimensional objects that are spatialized with the signals.
In understanding what needs to be done for the project, I still have many main steps:
1. Completely overhaul the spatialization and add a new, more advanced spatialization toolset/visualizer that works with the updated software.
2. Add vocal recognition through a speech processing patch I make myself, and get it working with the updated Max project.
3. Add new components that track the Fourier transforms for analysis, supporting both the original features and the added vocal recognition.
4. Make sure the vocal, MIDI, and OpenGL components are working so that objects are spatialized through the vocal component or the MIDI interface.
5. Potentially add a virtual reality (VR) component to the project to let you traverse the world.
Ultimately, I want SoundStroll 2.0 to be a logical successor to its former self. SoundStroll 2.0, or the new edition that I plan to create, will use Fourier transforms for vocal recognition through a speech processing application, which can also be analyzed for deeper findings aurally or mathematically in relation to sound; a different, more advanced spatialization toolset to create a scene that you can traverse; and a more connected environment offering different ways to traverse with Max for Live, Ableton Live, and Reason, which will allow new sounds and objects to be triggered and spatialized through SoundStroll 2.0.
This week I looked over the specific dataset I want to use for my project and reached out to sources to obtain it. I also read various research papers and checked existing experiments and projects to learn how to implement a Convolutional Neural Network (CNN) in Python using TensorFlow and Keras. I read documentation and went through tutorials to learn TensorFlow and OpenCV. Finally, I set up TensorFlow on my local machine to work with the OpenCV library.
I’m just sharing a helpful tutorial link below for understanding OpenCV.
My first idea is for an application that allows the computer to be navigated with gesture control. My initial thought is to use the camera that is on almost every laptop to map the mouse pointer between, say, the thumb and the forefinger, and when the thumb and forefinger touch, emulate the click of the mouse. Further interface features could also be implemented, such as a virtual keyboard or talk-to-text, basically attempting to replace a mouse and keyboard; further research is needed.
My second idea is either standalone software or a Photoshop add-on for real-time pixel art animation editing. Given a sequence of images a specified distance apart, a color palette, and the speed at which to move through the images, one could make a change and the animation would update in real time, with the option of swapping color palettes as well.
My third idea is a personal budget planning and expense tracking app. A person can track what they buy by inputting the cost of an item and the category it falls into (possibly with further subcategories for more in-depth statistics), e.g., $16.69 on groceries on 1/21/19, $32.55 on clothes on 1/22/19, etc. One can input their salary and how much they want to save, and the app could keep track, suggest a budget, give statistics about spending patterns, etc.
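The core of that third idea, categorized expenses plus a running budget, could be sketched like this. All names and numbers here are illustrative, not from an existing app.

```python
# Minimal expense-tracking core: record categorized purchases and
# report how much of the monthly budget remains.
from collections import defaultdict

class BudgetTracker:
    def __init__(self, monthly_budget):
        self.monthly_budget = monthly_budget
        self.totals = defaultdict(float)  # category -> amount spent

    def add_expense(self, category, amount, date):
        # `date` is kept in the signature for per-day stats later.
        self.totals[category] += amount

    def remaining(self):
        return self.monthly_budget - sum(self.totals.values())

tracker = BudgetTracker(monthly_budget=500.00)
tracker.add_expense("groceries", 16.69, "1/21/19")
tracker.add_expense("clothes", 32.55, "1/22/19")
print(round(tracker.remaining(), 2))  # → 450.76
```

Subcategories and spending-pattern statistics would layer on top of the same per-category totals.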
The tangible work I have done this week on my project is finding the data sets and starting to set up a workflow to automate the process. I created the connection between SQL and my simulation so that the simulation takes two arguments: the name of the database in which to store the timestamp data and the result data, and the file path of a CSV containing the data. This sets up a database with three tables, two of which contain results and one of which contains the actual timestamp data. I have also started to take notes as I go so that I can refer to them when I start writing my paper.
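The setup described above might look something like the following, using sqlite3 as a stand-in for the actual SQL backend; the table and column names are hypothetical, chosen only to show the database-name-plus-CSV-path argument structure.

```python
# Create a database with one timestamp table and two result tables,
# then load the CSV of timestamp data into it.
import csv
import sqlite3

def setup_database(db_name, csv_path):
    """Set up the three tables and load the CSV into `timestamps`."""
    conn = sqlite3.connect(db_name)
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS timestamps (ts TEXT, value REAL)")
    cur.execute("CREATE TABLE IF NOT EXISTS results_a (metric TEXT, score REAL)")
    cur.execute("CREATE TABLE IF NOT EXISTS results_b (metric TEXT, score REAL)")
    with open(csv_path) as f:
        rows = [(r[0], float(r[1])) for r in csv.reader(f)]
    cur.executemany("INSERT INTO timestamps VALUES (?, ?)", rows)
    conn.commit()
    return conn
```

The simulation would then be invoked as `simulation.py <db_name> <csv_path>` and write its results into the two result tables as it runs.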
I have gotten off to a relatively smooth start on my project. Xunfei is going to be my adviser, and I have scheduled a meeting with her for tomorrow morning. I have also prepared to present my topic in class tomorrow. I spent the majority of my time this week starting my project implementation. I am still learning how to use Max/Jitter effectively, but here are some of the things I have done:
-Map color values to MIDI notes.
-Map a saw waveform function over the x-axis with scale relative to note frequency
-Make the saw function scroll horizontally at a rate relative to frequency
-Crossfade the color and waveform matrices
With all of this together, my project’s visual output represents a single note at a time. Obviously, it is a pretty rough version of what I eventually want the single-note visualization to look like, but the foundation of representing tone through color and pitch through wave frequency is functional, with no noticeable latency between note press and display.
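The scaled, scrolling saw waveform from the list above can be sketched numerically. This is a NumPy illustration of the idea, not the Max patch itself; the base frequency (middle C) and the cycle count are assumed parameters.

```python
# One row of a saw waveform whose spatial scale tracks the note's
# frequency and whose phase offset scrolls it horizontally over time.
import numpy as np

def saw_row(width, note_freq, time, base_freq=261.63):
    """Sawtooth ramp values in [0, 1) across `width` pixels."""
    x = np.arange(width) / width
    scale = note_freq / base_freq   # higher notes -> more cycles on screen
    phase = time * scale            # scroll speed also tracks frequency
    return (x * scale * 4 + phase) % 1.0

# Middle C at time 0: four full saw cycles across the row.
row = saw_row(width=8, note_freq=261.63, time=0.0)
print(row)
```

In the patch, a matrix of such rows would be crossfaded with the color matrix to produce the per-note display.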
This week I met with Ajit for an hour. We went over timeline and design of my project. I also met with Craig and ordered the RFID reader and tags after approving them.
This week I reviewed my project proposal and started setting up the coding environment. I also asked Xunfei to be my advisor for the project. During our meeting we discussed the project’s timeline for the first month and I made some changes in my timeline according to Xunfei’s suggestions. I’m also preparing for the project presentation in the next class.