- Over spring break I successfully mapped MIDI control into the acoustics of the three-dimensional space.
- I have also successfully added an FIR-based buffer into the program to track analytics. The buffir~ object is used when you need a finite impulse response (FIR) filter that convolves an input signal with samples from an input buffer. With that, you can look at an FIR filter response graph in MaxMSP, which calculates the filter response of the FIR in the main sound patch for a given number of samples (coefficients).
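Outside of Max, the operation buffir~ performs can be sketched in a few lines; this is a NumPy stand-in, with made-up coefficients (a simple 4-tap moving average) rather than anything from the actual patch:

```python
import numpy as np

# Hypothetical FIR coefficients (a 4-tap moving average); in Max these
# would live in the buffer~ that buffir~ reads from.
coeffs = np.array([0.25, 0.25, 0.25, 0.25])

# An input signal: an impulse followed by silence.
signal = np.zeros(8)
signal[0] = 1.0

# An FIR filter is just the convolution of the input with the
# coefficients, which is what buffir~ computes sample by sample.
output = np.convolve(signal, coeffs)[: len(signal)]

print(output)  # the impulse response equals the coefficients themselves
```

Feeding an impulse through the filter recovers the coefficients, which is exactly what the filter response graph visualizes.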
Capstone Update 3/20/2019
Created and ran the experiment, and noted down the results.
Finished the first draft of the Capstone paper. Currently revising and polishing it.
weekly update 3/20
This week I turned in a new draft of my paper and fixed a few bugs. I am now ready to collect most of the data I’ll need.
weekly update (3/16)
This week I tested the reader with the student interface and checked that self check-in and check-out worked properly. I also met with Craig and discussed a plan to migrate the application to the server and perhaps have it ready for the EPIC expo.
Weekly update (3/5)
This week I spent most of my time working on the paper, and finished the implementation and design sections. I also started working on the administrator interface and got a good portion of it done.
CS488 Update 7 (03/06/19)
This week I worked on improving my final paper and on the MIDI mapping and audio of my project inside MaxMSP. I primarily worked on getting the audio format correct so that each file can loop and connect with its associated object in the three-dimensional space within MaxMSP. I also worked on the formatting of my paper; it needed to flow from topic to topic better while making clear why each topic was relevant to the project as a whole.
March 6th update
This week I have been working on my paper both revising the submitted partial draft and adding the portions that were not done yet.
I have also done some work on my project itself, mostly refactoring. All the variations I am testing can now be turned on and off via toggle buttons, and everything that needs to be initialized to run the patch is accessible in the main patch window instead of requiring the subpatches to be opened. I also revised portions of my project that had large repeated sections of code.
This week I
-implemented feedback from the first draft
-started writing the rest of the paper
-did some minor debugging and commenting
Capstone Progress 3/6/2019
Built a functioning, testable (not yet accurate) neural network that takes in my input of heuristic data and outputs a direction.
Since the puzzle states are strings, I wanted the output to be the value added to the index of the “_” to move to a new position, but negative values cannot be used as targets or labels in a neural network, so instead I encoded the moves using the values 0-3.
I created a function that takes in the output of the neural network and converts it to a move for the puzzle.
I also now know which specific layers I need to experiment with to make my neural network accurate.
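A rough sketch of that conversion function might look like this; the move offsets follow the encoding from the 1/30 progress post (left=-1, right=+1, up=-4, down=+4), while the example state and the boundary check are my own simplifications (row wrap-around is ignored):

```python
# Map the four class labels (0-3) the network predicts back to index
# offsets for the blank "_" in the puzzle string.
OFFSETS = {0: -1, 1: +1, 2: -4, 3: +4}  # left, right, up, down

def apply_move(state: str, label: int) -> str:
    """Convert a network output label into a new puzzle state."""
    blank = state.index("_")
    target = blank + OFFSETS[label]
    if not 0 <= target < len(state):
        raise ValueError("move leaves the board")
    cells = list(state)
    cells[blank], cells[target] = cells[target], cells[blank]
    return "".join(cells)

# Example: slide the blank one cell to the right on a 4x4 board.
print(apply_move("0123456789ab_def", 1))
```

Encoding the moves as the labels 0-3 keeps the targets non-negative, and the decoder recovers the signed offset only when the move is actually applied.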
Week 6 Update
- Found some more research papers to read which were related to the topic that I chose.
- Worked on the literature review.
Week 4 Update
This past week, I have spent time doing research to find interesting papers that I think will be related to my ideas. For each of my ideas there are at least five papers to look at at the moment. However, for the last idea, a mobile application for time scheduling, I could only find some topics about effective time scheduling, and most are just about general mobile application development. I have also given some papers a first pass in order to briefly understand what the authors were proposing and how well those papers fit my topics. I am also looking for available data sets for my first two ideas.
388 week 6
I have continued working on my annotated bibliography, but I have not finished it yet. I am finding it hard to focus on and stay motivated for a project that I cannot physically work on. I am still missing a couple of sources for the annotated bibliography and have been a bit lazy about looking for them. I think I will start by looking at the citations in other papers that I have found.
CS488 Update 6 (02/27/19)
This week mainly dealt with the writing of my final paper. My paper was a mostly complete draft that worked for my initial goal and vision of SoundStroll 2.0. However, in its final form it will be called SonoSpatial Walk, and it will be its own project entirely. The added changes allow for generation of objects through MIDI, and hopefully other things will be possible with the source code, which I have completely changed as well. With the object generation in place, I hope to work on the sound properties that play and loop through triggers. Also with the sound properties, I'm hoping I can resynthesize sound in real time using Fourier transforms in order to completely change the sound as well.
Over this week I worked on finishing a first draft of my paper. I also worked on debugging code and trying to install some packages.
Feb 27 update
This week I:
-worked on my paper draft
-worked on fixing bugs in my software
Read the CS papers help documents to understand how to read and understand long papers.
I am finding it difficult to search for papers directly related to my ideas. I read a couple of papers according to the instructions, which helped me understand the material.
Met with Dave and Ajit to discuss the 3 ideas. The first 2 ideas were supported, and I received some extra information about features I can add to the project.
ACM membership is set up, and I am trying to refine the 3 ideas by looking up CS papers and discussing with fellow peers.
Weekly update (2/25)
This week I finished implementing a rough version of the student user interface. I spent considerable time discussing the logic behind student check-out and check-in and what measures were necessary to put in place. I received feedback from Ajit on the design and modified my approach based on that.
I have been a little behind on synthesizing my articles into the annotated bibliography; mostly I have just been trying to collect more and more articles. The gesture control idea and the educational app idea seem a little difficult to complete in a semester of work, so I think I will probably continue forward with the PCG for my video game. It is something I am definitely passionate about; I can't say the same for the others.
February 20th Update
-I created a large variation of my algorithm that spatializes the visualization based on just intonation ratios instead of being directly correlated to the frequencies. This involved implementing a new module that calculates the just intonation ratio and scales the sine wave visuals to it.
-I created tables to use in my paper draft and started turning my outline into my draft.
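The ratio calculation such a module performs might look something like this; a minimal sketch, assuming the goal is to snap a frequency ratio to the nearest small-integer just-intonation interval (the function name and the denominator limit are hypothetical, and this is not the actual Max module):

```python
from fractions import Fraction

def just_ratio(f_note: float, f_root: float, max_den: int = 16) -> Fraction:
    """Approximate the interval between a note and the root as a
    small-integer just-intonation ratio (e.g. 3/2 for a perfect fifth)."""
    return Fraction(f_note / f_root).limit_denominator(max_den)

# A 440 Hz note over a ~293.33 Hz root is (very nearly) 3/2, a just fifth.
print(just_ratio(440.0, 293.33))  # -> 3/2
```

The resulting ratio (rather than the raw frequency) would then drive the scaling of the sine wave visuals.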
CS488 Update 5 (02/20/19)
This week, while I was not able to work on the coding aspect to my project, I did receive a synthesizer from Professor Forrest Tobey, and have been working extensively in MaxMSP to get it to react to my program.
Also, I have been working on my paper to get it ready for the 02/25/19 deadline.
I’m still trying to get the data over. I’ve been working on it for 2 to 5 hours every day, but errors keep arising. I’ve also researched algorithms and have a better idea of my project’s scope.
I tested that the last of my dependencies is on the cluster and functional. I started implementing the second major piece of my approach, SURF. It is taking a little longer than expected, but not a major delay.
Changed a piece of my pipeline, which created some issues. I am working on fixing those, and hopefully improving the overall functionality in the process.
Capstone Progress 2/19/2019
Managed to get Keras, an open source neural network library, installed in Jupyter.
Currently focusing on building a sample neural network, adjusting the data into a format that can be used by the network, iterating on the architecture required for the Neural Network and writing a draft of the first few sections of the Capstone paper.
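While the actual work uses Keras, the overall shape of such a network can be sketched in plain NumPy; a toy forward pass, assuming 4 heuristic inputs and the 0-3 move encoding as the 4 outputs (the layer sizes and random weights here are made up, not the real architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 4 heuristic values in, a small hidden layer,
# and 4 outputs (one per move, matching the 0-3 encoding).
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 4)), np.zeros(4)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(x):
    """One forward pass: heuristic vector -> distribution over 4 moves."""
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer
    return softmax(h @ W2 + b2)

probs = forward(np.array([3.0, 1.0, 4.0, 2.0]))
print(probs.argmax())  # predicted move label in 0-3
```

Reformatting the data so each state becomes one such fixed-length heuristic vector is exactly the "adjusting the data into a format the network can use" step.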
Weekly update (2/17)
This week I discussed the structure of my paper with Ajit, and received feedback on how to explain the design and implementation sections. I also started working on implementing the student user interface.
Weekly Progress CS388 (2/15)
For this week, I have been looking for research papers on the three topics that I chose, specifically for Text Categorization and the Air Quality Monitoring system. Two things I found out:
– Air Quality Monitoring system: There has been a lot of research on low-price air quality systems, some of which use the same technology as I do. This gives me two advantages: first, I can reference and study those papers for my research. Second, I can compare the results from my model with those of other researchers, so that my monitoring system should behave the same as the others (which would indicate that my model is working correctly).
– Text Categorization: It has been a research and application topic for over a decade. Many researchers have made a considerable amount of progress on this topic, which gives me more insight into how to approach it, which model to use, and so on. Furthermore, I have not found any documents about the application of text categorization to social media, so my application could be a new addition to the topic. One thing I need to worry about is how I will get the posts and analyze them. I also need a pre-classified dataset to train my model on, which I haven't found yet. Other than that, I think this can be a great topic to look into.
CS488 Update 4 (02/13/19)
This week, I was able to realize partial spatialization in MaxMSP using the new tool that I had found previously, HOA Library. Right now I am trying to get my object driver to work so I can create multiple figures. Also, Forrest wanted me to map the knobs of a MIDI keyboard to Max and the patch.
I have started collecting papers on my three ideas.
Many existing sources have used gesture-controlled navigation in the past, so I think it will be hard to differentiate from them, but most of them seem to require another device. There are a couple that are controlled by just a camera; this would be a neat way to solve the problem and wouldn't require people to buy anything.
For my second idea, PCG for my video game, I have gathered sources where people use PCG to remake games or even make new ones. None of them seem very similar to what I would like to implement. I have also found a few good overviews and literature reviews of the subject. These will come in handy.
The papers I found for my last idea, an educational app for ultimate frisbee, mostly just cover the theories and methods behind educational applications. These include apps such as Duolingo and other language learning sources. I figure a lot of this could be mapped over to any educational application. It seems like a lot of the problems in this field deal with retention.
I have been trying to get the sensor to give me significant results and have had a lot of trouble with that. I mostly did research and experimentation to try and get it to work.
2/13 update of the week
This week I finished my implementation of the SIFT algorithm and am starting to work on implementing SURF next.
I am also looking at ways to organize images to improve accuracy. The current plan is to extract keypoints, then organize photos based on similar keypoints rather than locality.
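The keypoint-based grouping idea can be sketched without the full SIFT/SURF pipeline; a toy NumPy version, assuming descriptors have already been extracted (the 2-d descriptors and the similarity measure here are purely illustrative — real SIFT/SURF descriptors are 128- and 64-dimensional):

```python
import numpy as np

def descriptor_similarity(desc_a, desc_b):
    """Toy similarity between two images' keypoint descriptor sets:
    for each descriptor in A, find its nearest neighbour in B, then
    average the (negated) distances. Higher means more similar."""
    # pairwise Euclidean distances, shape (len(desc_a), len(desc_b))
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    return -d.min(axis=1).mean()

# Two "images" with similar descriptor sets, and one that differs.
img1 = np.array([[0.0, 0.0], [1.0, 1.0]])
img2 = np.array([[0.1, 0.0], [1.0, 0.9]])
img3 = np.array([[5.0, 5.0], [6.0, 6.0]])

# img1 should be grouped with img2, not img3.
print(descriptor_similarity(img1, img2) > descriptor_similarity(img1, img3))
```

Grouping photos by this kind of descriptor similarity, rather than by where they were taken, is the organization strategy described above.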
Feb 13th update
This week I did the following tasks:
-Met with Dave about the structure of my outline and my experimental results section
-Met with Forrest to get feedback on my work.
-Began to work on a new variation of my algorithm based on Forrest’s feedback
-Read chapter 5 of Writing for Computer Science
Week 3 Update
I have talked with Ajit about all three ideas and received some feedback and suggestions. Also, I have started finding research papers related to my ideas to read for next week.
Weekly Update (Feb 10)
This week I finalized the schema for the database with Ajit and familiarized myself with the PostgreSQL commands after receiving the login information from Craig. I faced an unexpected challenge with the ordering of the RFID device from eBay. Instead, I researched for two days, found a few other cost-friendly options in the US, and have proposed that the department purchase one of them.
I am planning to finish all the software end of the project by the time I get my hands on the device!
388-Week 3 updates
I decided to pivot on my ideas a little bit because two of them were already solved and I didn't feel I had anything to add to the field. I have found papers on my new ideas and am starting to read them; the ideas are listed below.
Idea 1 : Gesture Controlled Mouse and Keyboard
Description: The idea is to fully replace the keyboard and mouse of a computer by using a 2D camera to track hand motions. I really like the idea of incorporating swipe-to-text as seen on some mobile phones; this could possibly increase typing speed.
Idea 2 : Procedural Level generation for 2D platform game
Description: Last semester I built a game in Unity with two friends, and we have continued to work on it with the intention of one day selling it. A large part of this process will be the generation of new levels. One idea is to develop an algorithm that procedurally generates the levels. While procedurally generated games are fairly common, procedurally generated platformers are far less so. There are a few challenges involved that I think would make this interesting.
Idea 3: Educational app for learning the rules of Ultimate Frisbee
Description: My other new idea is to create an app similar to other educational apps such as Duolingo, but with the intention of teaching the rules of ultimate frisbee. Since it is a self-refereed sport, it is much more important that the players know the rules themselves. However, the rules in the rule book are very wordy and sometimes hard to follow; this would provide an easily accessible way to learn the rules without having to sit down and read a book.
Weekly update 2/6
This week I made a first draft of an outline for the paper. I also worked on implementing the first of three algorithms I plan to use in my research, the Scale Invariant Feature Transform. While not completely finished, I have most of the framework complete and am just about on schedule.
CS488 Update 3 (02/06/19)
This week offered me a chance to get into detail with spatialization libraries:
- Jamoma – C++ and Max/MSP general purpose audio and spatialization toolset with standard interfaces, requires custom installation depending on the version of Max though. This library is needed to use most of Tom’s Max patches.
- NASA Slab – (Older) open source project for testing spatial auditory displays, requires registration to NASA open source archive.
- CICM Higher order ambisonics library – SuperCollider(under development), CSound, Pd, Faust, oFx, Max/MSP and C++ archive of useful ambisonics stuff, renders down to binaural well but computationally quite intensive. This library is required to use most of Tom’s Max patches.
- Sound Field Synthesis Toolbox for MATLAB – Python version exists as well btw. Sound Field Synthesis Toolbox (SFS) for Matlab/Octave gives you the possibility to play around with sound field synthesis methods like Wave Field Synthesis (WFS), or near-field compensated Higher Order Ambisonics (NFC-HOA). There are functions to simulate monochromatic sound fields for different secondary source (loudspeaker) setups, time snapshots of full band impulses emitted by the secondary source distributions, or even generate Binaural Room Scanning (BRS) stimuli sets to simulate WFS with the SoundScape Renderer (SSR).
- MIAP – Max/MSP objects, not spatial audio per se but pretty cool. More spatial parameter space exploration, though the binaural example is in the pack.
- Octogris – OSX DAW 8 channel spatialization plugin
- Spatium – Plugin (AU), Max/MSP, Standalones (OSX). Modular open source software tools for sound spatialization: renderers, interfaces, plugins, objects. Got some nice processing based physic interactions for spatial control.
- ambiX – Ambisonics spatialization Plugin compatible with Reaper, Ardour, MaxMSP, Bidule or as standalone applications with Jack.
- HOA – Higher Order Ambisonics (HOA) resources for Pure Data and Max from Paris Nord university.
- ATK – Ambisonic Toolkit for Reaper and SuperCollider.
- Sonic Architecture – resources for ambisonics in Csound and the blue environment from Jan Jacob Hofmann.
- Iannix – a graphical open source sequencer for digital art. It requires sound-making software or hardware connected to the sequencer. The sequencer sends instructions (e.g. OSC) and allows you to create 2D and 3D scores by programming the behavior of sliders and triggers.
- Zirkonium – tool from zkm to spatialize music.
- NPM – web audio classes for Ambisonic processing FOA and HOA.
- [omnitone](https://github.com/GoogleChrome/omnitone) – spatial audio on the web – by Google.
Adapted from: https://github.com/darkjazz/qm-spatial-audio/wiki/Open-source-free-spatialisation-tools-around-the-web
What I decided to go for was HOA Library, because HoaLibrary is a collection of C++ and FAUST classes and objects for Max, PureData, and VST intended for high order ambisonics sound reproduction. It won “Le Prix du Jeune chercheur,” awarded by the AFIM in 2013. The library is free, open source, and made available by CICM, the research center for music and computer science of Paris 8 University. Using it, I know that I can make a lot of edits, and many people have used it in concert or installation settings. Now that it’s decided, I can work towards the connection between MIDI and the spatialization of objects.
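For a flavor of what an ambisonics library computes under the hood, here is a minimal first-order (B-format, horizontal-only) encoding sketch in NumPy; this is the textbook FOA encoding of a mono source at a given azimuth, not HOA Library's actual code:

```python
import numpy as np

def encode_foa(mono, azimuth_rad):
    """Encode a mono signal into first-order ambisonics (B-format,
    horizontal only): W carries the omnidirectional component, X and Y
    the figure-of-eight components toward front and left."""
    w = mono / np.sqrt(2.0)          # conventional -3 dB on W
    x = mono * np.cos(azimuth_rad)   # front/back axis
    y = mono * np.sin(azimuth_rad)   # left/right axis
    return w, x, y

# A source dead ahead (azimuth 0) has no Y component at all.
s = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
w, x, y = encode_foa(s, 0.0)
print(np.allclose(y, 0.0))
```

Higher orders add more spherical-harmonic channels to the same idea, which is what makes the rendering more precise but also more computationally intensive.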
Here are my ideas for topics! Another post with some cost analyses is coming soon!
Topic Name: Real-time management using Augmented Reality
Topic Description: Examine the applications of Head-Mounted augmented reality displays such as HoloLens or Project Northstar in real-time management scenarios like Theatre Stage Management or NASA rocket launch management and implement a basic proof of concept software to eventually be used in the Theatre department as part of my Theatre Capstone.
Topic Name: Using real-time spatial mapping to improve calling for stage managers during performances
Topic Description: Examine the feasibility of using technologies such as Kinect 2.0 in the area of theatre to allow stage managers to keep better track of the positions etc of their actors, allowing them to make more accurate cue calls when their vision of the actors might be obscured, and implement a proof-of-concept application for use in my Theatre Capstone.
Topic Name: Using micro controllers to facilitate cross-device communication and control
Topic Description: Use Arduinos or Raspberry Pis to allow two or more very different devices to be controlled by another device. For example, allow Qlab on a computer and cues on a light board to be controlled from a single application running on a different computer. If feasible, make a proof-of-concept for use in my Theatre Capstone.
Feb 6 update
This week I accomplished several things:
-Reconfigured the way my patch combines matrices to avoid issues I was having with crossfade
-Implemented horizontal movement after note press
-Organized parts of my project into sub-patches and cleaned up some stuff
-Implemented envelopes connected to sound and video out
-Created an outline for my paper
This week I talked to Dave about the delay on my database. I am waiting to hear back from Charlie and/or Craig so I can start building the database and connecting the dots. Until then, I have read up on some more algorithms, specifically Reddit's and Elo's. I am also writing up my outline for CS488 and cleaning up some paragraphs from last year's paper.
Capstone Progress #3 2/5/2019
Built a prototype testing agent for the Capstone that should, in theory, take in a file of problem states and go through them with each heuristic (the neural network is not yet ready), then output the solution size, the number of nodes visited, and the number of nodes waiting for a visit. I haven't been able to properly test it yet, so I will need to make time for that.
Tried writing a high level outline of my Capstone.
388 Week 2
I deliberated further on my 3 ideas. I really enjoy the idea of replacing the keyboard-and-mouse setup with a gesture recognition system using a 2D camera; most gesture recognition systems require another device that one must buy, but cameras are already built into most things. My other favorite is the personal budgeting app. I am interested in this one because it applies to my life; it would be software that I would use to help me save money.
I have changed two of my ideas upon re-evaluating the achievability of the project given my skills and the time limit I have.
This week, I finally got my own laptop, so I redownloaded all the necessary software and libraries for my gesture recognition system. Using the OpenCV library in Python, I was finally able to use the camera to detect the hand by separating the background from the foreground via thresholding. Then, I used findContours to identify the hand within a region of interest (ROI). Finally, I made a copy of the frame containing the ROI and displayed the video in binary black and white. My code can now detect the hand against a static background.
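The thresholding step can be sketched without OpenCV; a NumPy stand-in for the background/foreground separation described above (the threshold value and the toy 4x4 frame are arbitrary, and the real code works on live camera frames):

```python
import numpy as np

def segment_foreground(frame, background, thresh=25):
    """Separate foreground from background the way the thresholding
    step does: absolute difference against a background frame, then a
    binary threshold (255 = hand/foreground, 0 = background)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return np.where(diff > thresh, 255, 0).astype(np.uint8)

# Toy 4x4 grayscale frame: a bright "hand" blob on a dark background.
background = np.zeros((4, 4), dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200

mask = segment_foreground(frame, background)
print(mask)
```

The resulting binary mask is what a contour finder like findContours then traces to locate the hand within the ROI.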
Weekly Update (Jan 30)
This week I met with Ajit and discussed some of the necessary features for the user interface that the administrator will be using. This involved seeing a list of recently checked out items, adding new objects, and adding new users. I also met with Craig and discussed the back-end work. We decided to use Django and PostgreSQL.
Capstone Progress#2 1/30/2019
In terms of implementation, I have created 4 different heuristic functions for my Capstone that take in a state and output a value based on how good the state is, with the fourth being inadmissible. Technically there are 5, as 2 of the heuristics are additions to a single heuristic.
Modified my admissible heuristic function so it outputs a move (left=-1, right=+1, up=-4, down=+4) based on what move a search algorithm would have performed if it were taking an action in that state.
Created my training agent function, which outputs a file containing vectors of these heuristic outputs, one per state.
Did some research into activation functions and Neural Network types to figure out what initial design I should go with for my Network.
January 30th Update
This week I worked on developing a script that takes a directory and determines whether the contents are all images of the same type, and if they aren't, converts them to JPGs. I also met with my adviser and started working on an outline that will be due next week.
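The directory-checking half of such a script might look like this; a sketch under my own assumptions (the extension list and function name are hypothetical, and the actual conversion, e.g. via Pillow's Image.save, is left as a comment):

```python
import tempfile
from pathlib import Path

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".gif", ".bmp", ".tiff"}

def non_jpg_images(directory):
    """Return the image files in `directory` that are not already JPGs;
    these are the ones the script would hand to a converter
    (e.g. Pillow) to turn into .jpg files."""
    files = [p for p in Path(directory).iterdir() if p.is_file()]
    images = [p for p in files if p.suffix.lower() in IMAGE_EXTS]
    return [p for p in images if p.suffix.lower() not in {".jpg", ".jpeg"}]

# Demo on a temporary directory with mixed contents.
demo = Path(tempfile.mkdtemp())
for name in ("a.jpg", "b.png", "notes.txt"):
    (demo / name).touch()
print([p.name for p in non_jpg_images(demo)])  # ['b.png']
```

Non-image files like the text file are simply ignored rather than converted.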
CS488 Update 2 (01/30/19)
This week focused on the FFT and other conversion formulas. The Fast Fourier Transform is an algorithm that computes the discrete Fourier transform (DFT) of a sequence, or its inverse (IDFT); it is central to Fourier analysis, converting a signal from its original domain (often time and/or space) to a representation in the frequency domain and vice versa. With my current project, I am trying to understand how this relates to the signal processing and MIDI, and how it will allow me to transfer the signals into the shapes that I will be making in Jitter and MaxMSP. Once I understand the multiple ways the transforms work, I can use the signals I get by decomposing a sequence of values into components of different frequencies to generate multiple sounds, which hopefully can be composed into a song or piece made through the patch. Currently it is SoundStroll 2.0, but it might end up being a whole different concept and Max patch when it's done; given the changes being made, the new product could be thought of as a completely different three-dimensional audiovisual spatialization software. My main work has been on creating a different audiovisual projection while working with MIDI and other signals to get different spatialization in MaxMSP; this was concurrently added to my paper and worked on in MaxMSP and Jitter.
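The decomposition the DFT performs can be seen concretely in a few lines of NumPy; a small example (not part of the Max patch) showing a two-component signal split into exactly its constituent frequencies:

```python
import numpy as np

# A signal made of two sine components, 5 Hz and 12 Hz, sampled at
# 64 Hz for exactly one second, so each component lands on one DFT bin.
sr = 64
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

# rfft gives the positive-frequency half of the DFT of a real signal.
spectrum = np.fft.rfft(signal)
magnitudes = np.abs(spectrum) / (sr / 2)  # normalize to component amplitude

peaks = np.flatnonzero(magnitudes > 0.1)
print(peaks)  # bin indices = frequencies in Hz, since the window is 1 s
```

Running this prints the two component frequencies, 5 and 12; resynthesis is the inverse step (np.fft.irfft), which is the basis for altering a sound in the frequency domain and converting it back.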
January 30 Update
Here is what I worked on for my project this week:
-Worked with poly tool to process two simultaneous notes.
-Got it working, but temporarily disabled the circle of fifths map table. Now MIDI messages are sent to two paths for separate notes.
-Using xfade to combine matrices is not good; colors are still evenly distributed throughout the whole matrix instead of following the frequency mapping.
-The patch is getting crowded; I need to figure out the in and out tools to split up and organize the project.
-Talked with Xunfei about getting Max installed on some Earlham laptops
Week 1 Ideas
Idea 1: Optical Character Recognition using Machine Learning
Optical character recognition is not a completely new field of research. However, with the advance of today's technology, we can apply machine learning to optical character recognition and thereby recognize not only printed characters but also the more difficult handwritten characters, or characters from different types of documents such as certificates or receipts. With the help of OCR, a lot of human work can be reduced. I would like to research that specific use of machine learning in OCR to handle those harder cases.
Idea 2: Disease prediction
People have always wanted to know their health status so they can prepare and take better care of their bodies. It is not difficult to know a person's current health status, since he/she can just go to a hospital for a general medical check. However, people also want to know what type of disease they may have in the future, and this is where disease prediction using machine learning will be useful. I would like to research this topic, but the first difficulty I found is how to get an appropriate training data set.
Idea 3: Time management application
Since students sometimes need help planning their classes and studying for exams, I want to create an application that can at first suggest the best times for students to go to class or to study. After that, each term, the student can input how well he/she performed in those classes/exams, and from that, in the next term/semester, the application may suggest a more specific timetable for that student to achieve better performance.
Week 1 Ideas
Topic Name: Earlham Tennis App
Developing an app for tennis players to use as a way to improve. Data entered during matches would give score stats, errors, and serve percentages. All this data will be stored in a database and could be accessed at any time.
Topic Name: Box Office App
Again, develop a mobile application, this time to manage the Earlham Box Office. Users would be able to look at the event schedule and purchase tickets. To make it efficient, student data will have to be obtained from the directory.
Topic Name: Sensor
If possible, make a sensor that would be attached to the tennis racquet, determining how much strength is used and how many balls were hit in a span of time. This data would be collected, and according to the body analytics of the player, it could be determined how to improve their performance. The stats could then be seen on a web application.
CS488 Week1 Update
Researched how image processing works (tutorials, examples using python and open-source software OpenCV). ~20 minutes on 1/19/19, ~30 minutes on 1/21/19
Made sure all necessary software was downloaded to my computer. ~30 minutes on 1/17/19
Adjusted timeline. ~15 minutes on 1/23/19
CS 488 First Post
Found an advisor and set up a meeting time to plan my project. Tested that the libraries I plan on using are available and functional on the cluster. Finally, reviewed my project to make sure the goals I've set myself are achievable and reasonable.
CS488 – Welcome Post
This is my first post for CS488.
- Met with Ajit to plan the next few weeks, and decided to start the data collection this week.
- Edited the timeline a little bit to include implementation of the project.
- I’ve shared my box folder with Dave and Charlie and will soon be contacting Andy Moore for access to volumetric data collected by the Geology students across campus.
CS488 Update 1 (01/23/19)
This first week, I tried to focus my project into something tangible. Initially, I was not sure about how I wanted to go about editing and making SoundStroll 2.0, and I had talks with advisors who told me that I tried to achieve too much in a short period and that I also needed to find a focus. Interestingly enough, I think that working with and creating a bandpass filter, in addition to other improvements, can produce different effects in SoundStroll 2.0, and I’m curious to see how that might change SoundStroll 2.0, regarding effects and three-dimensional objects that are spatialized with the signals.
In understanding what needs to be done for the project, I still have many main steps:
1. Completely overhaul and add a new, more advanced spatialization toolset/visualizer to work with the updated software.
2. Add vocal recognition, through a speech processing patch of my own making, to the updated Max project, and get them to work together.
3. Add new additions that track the Fourier transforms for analysis, supporting both the original project and the added vocal recognition through the speech processing patch.
4. Make sure that the vocal, MIDI, and OpenGL components are working so that objects are spatialized through the vocal component or the MIDI interface.
5. Potentially add a virtual reality (VR) component to the project to let you traverse the world.
Ultimately, I want SoundStroll 2.0 to be a logical successor to its former self. SoundStroll 2.0, or the new edition that I plan to create, will use Fourier transforms for vocal recognition through a speech processing application, which can also be analyzed for deeper findings, aurally or mathematically, in relation to sound; a different, more advanced spatialization toolset to create a scene that you can traverse; and a more connected environment to create different ways to traverse with Max for Live, Ableton Live, and Reason, which will allow new sounds and objects to be triggered and spatialized through SoundStroll 2.0.
Week 1 – Update
This week I looked over the specific dataset that I want to use for my project and reached out to sources to obtain it. Similarly, I read various research papers and checked existing experiments and projects to learn how to implement a Convolutional Neural Network (CNN) in Python using TensorFlow and Keras. I read documentation and went over tutorials to learn TensorFlow and OpenCV. I also set up TensorFlow on my local machine to work with the OpenCV library.
Just sharing a helpful tutorial link below to understand about OpenCV
388 week 1 – 3 ideas
My first idea is an application that allows the computer to be navigated with gesture control. The initial thought is to use the camera that is on almost every laptop to map the mouse pointer to, say, the space between the thumb and the forefinger, and when the thumb and forefinger touch, emulate the click of the mouse. Further interface features could also be implemented, such as a virtual keyboard or talk-to-text, basically attempting to replace a mouse and keyboard; further research is needed.
My second idea is either standalone software or a Photoshop add-on for real-time pixel art animation editing. Given a sequence of images a specified distance apart, a color palette, and a speed at which to move through the images, one could make a change and the animation would update in real time, also allowing the change of color palettes.
My third idea is a personal budget planning and expense tracking app. I person can track what they buy by inputting the cost of an item and categorize that item falls into (possibly further subcategories for more in depth statistics) ie $16.69 on groceries on 1/21/19, $32.55 on cloths on 1/22/19 etc. One can input there salary and how much they want to not spend and the app could keep track and suggest a budget for you, give statistics about your spending patterns etc.
Week 1 update 488
The tangible work I have done this week on my project is finding the datasets and starting to set up a workflow to automate the process. I created the connection between SQL and my simulation, so the simulation now takes two arguments: the name of the database in which to store the timestamp data and the result data, and the file path of a CSV containing data. This sets up a database with three tables, two of which contain results and one of which contains the actual timestamp data. I have also started taking notes as I go so that I can refer to them when I start writing my paper.
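Roughly, the two-argument entry point fits together like this. This is only a sketch: the table and column names below are placeholders I made up for illustration, not my actual schema.

```python
import csv
import sqlite3
import sys

def setup_database(db_name):
    """Create the three tables: two for results, one for the raw timestamp data.
    (Table and column names here are illustrative placeholders.)"""
    conn = sqlite3.connect(db_name)
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS timestamps (run_id TEXT, t REAL)")
    cur.execute("CREATE TABLE IF NOT EXISTS results_a (run_id TEXT, value REAL)")
    cur.execute("CREATE TABLE IF NOT EXISTS results_b (run_id TEXT, value REAL)")
    conn.commit()
    return conn

def load_csv(conn, csv_path):
    """Bulk-insert timestamp rows from the input CSV (assumed two columns)."""
    with open(csv_path, newline="") as f:
        rows = [(r[0], float(r[1])) for r in csv.reader(f)]
    conn.executemany("INSERT INTO timestamps VALUES (?, ?)", rows)
    conn.commit()

if __name__ == "__main__" and len(sys.argv) >= 3:
    # Argument 1: database name; argument 2: path to the input CSV.
    conn = setup_database(sys.argv[1])
    load_csv(conn, sys.argv[2])
```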
January 23 Update
I have gotten off to a relatively smooth start on my project. Xunfei is going to be my adviser, and I have scheduled a meeting with her for tomorrow morning. I have prepared to present on my topic in class tomorrow. I spent the majority of my time this week starting my project implementation. I am still learning how to use Max/Jitter effectively, but here are some of the things I have done:
-Map color values to MIDI notes.
-Map a saw waveform function over the x axis with scale relative to note frequency
-Make the saw function scroll horizontally at a rate relative to frequency
-Crossfade the color and waveform matrices
With all of this together, my project's visual output represents a single note at a time. Obviously it is a rough version of what I eventually want the single-note visualization to look like, but the foundation of representing tone through color and pitch through wave frequency is functional, with no noticeable latency between note press and display.
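For reference, the note-to-frequency scaling behind the saw mapping follows the standard equal-temperament formula, freq = 440 · 2^((n − 69) / 12). A quick Python sketch outside of Max (the saw-row scaling here is my rough analogue of mapping a saw over the x axis, not the patch's exact Jitter math):

```python
import numpy as np

def midi_to_freq(note):
    """Equal-temperament frequency for a MIDI note number (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def saw_row(note, width=64):
    """One horizontal row of a sawtooth whose period scales with the note's
    frequency, normalized to 0..1. The scaling is illustrative only."""
    freq = midi_to_freq(note)
    x = np.arange(width)
    phase = (x * freq / 440.0) % width  # hypothetical scaling, not the patch's
    return phase / width

print(round(midi_to_freq(69)))  # 440
```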
First Post- Ali
This week I met with Ajit for an hour. We went over the timeline and design of my project. I also met with Craig and ordered the RFID reader and tags after they were approved.
22/1/2019: Capstone Progress#1
First post of my Capstone progress. The first thing I did was review my Capstone proposal paper to refamiliarize myself with it. I then immediately began the work I outlined for Week 1 in my timeline.
I built a generator for random initial board states for an 11-tile sliding puzzle. I chose to have each board state be a solved state that is then subjected to a random number of moves; this ensures that each state is solvable without having to check first. Another choice I made was to represent each state as a string, with each tile represented by a hexadecimal digit and its position by its index in the string. The blank space is represented by "_". I did this because a large number of states will be generated during the AI's decision-making process, so the states should be as small and compact as possible.
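A minimal sketch of that scramble approach, shown here on a 4×4 board (the classic 15-puzzle) purely for illustration since hex digits fit it nicely; the names and move count are mine, not the generator's actual code:

```python
import random

SIZE = 4  # 4x4 board chosen for illustration; tiles 1..f as hex digits, "_" blank
SOLVED = "123456789abcdef_"

def neighbors(state):
    """States reachable by sliding one adjacent tile into the blank."""
    i = state.index("_")
    r, c = divmod(i, SIZE)
    out = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < SIZE and 0 <= nc < SIZE:
            j = nr * SIZE + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]  # swap blank with the neighboring tile
            out.append("".join(s))
    return out

def random_start(moves=50):
    """Scramble the solved board with random moves, guaranteeing solvability."""
    state = SOLVED
    for _ in range(moves):
        state = random.choice(neighbors(state))
    return state
```

Because every state is reached by legal moves from the solved board, every generated state is solvable by construction, which is the point of this approach.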
I also started to look into tools for creating the neural network. So far I am looking at TensorFlow and Keras in combination with Python.
Final Paper – crawl-o-matic-o-matic
Poster – ebramth15_poster
Gitlab with code – https://gitlab.cluster.earlham.edu/duckroller/crawl-o-matic-o-matic
Final – Rei
Final deliverables for CS488
Github directory: https://github.com/mashres15/FakeNewsCapstone
Finished working on the final paper.
This week, I attempted to collect preliminary data, which made me realize that I would need an air compressor in order to obtain meaningful data. I added this to my proposal and made some modifications to improve it overall.
This week after receiving feedback from Xunfei on the second draft of my paper, I updated it with respect to her comments. I also followed up with Craig; he is going to purchase the equipment early next semester.
Weekly Update Dec 5
Got feedback for my second draft of my proposal. Ready to make these final changes and submit on time! Started installing the dependencies so that I can get familiar with the tools I’ll be using over the break. Not planning to commit too much effort into this in the midst of finals, but will redouble my efforts after the tests are over.
Weekly update Dec 3rd
This week I received feedback on the second draft of my proposal from Xunfei. I spent time updating the proposal with respect to her feedback. I also met with Craig and decided on a set of RFID reader and tags. Craig is going to purchase them before the end of the semester.
CS-488 Nov 28 Update
This week and the previous one, I spent my time trying to beat my way through a variety of problems related to one specific module of my project's code. After talking it over with my adviser, I have shifted both that piece of code and my priorities. The original model of my project was that, after the blockchain authenticated a 'website' so that it could access a user's 'cookie' file, the functions would be passed from the website's server to the user's. My adviser suggested I simply change this so that the user has both the file and the functions, and the website just sends a signal if the blockchain transaction was approved. The actual processing of the functions on the cookie file is not as important as the blockchain authentication, which is the center of the project.
The refocusing suggestion also applied to everything related to my experiment, such as the details of my experimentation and the discussion of further research: get as much done as possible, with a window left open for my experimental results.
This week, I was focused on making the poster and including the final results of my research.
Weekly Update (Nov 23)
This week I haven't been able to work much because of some traveling. However, I was able to think more about my design decisions, including the RFID reader that I will be buying. I have selected a device and discussed it with Xunfei. I am planning to meet with Craig and possibly purchase the equipment before the end of the semester.
Weekly update (Nov 17)
This week I received feedback from Xunfei on the initial draft of my proposal. I spent the last few days thinking more about the design and trying to answer some questions that Xunfei raised. Referring back to my sources has been very helpful.
Submitted the second draft on Wednesday and am waiting for Xunfei's feedback. The demo presentation went well, without many questions from the audience. Now working on the poster for the presentation on Dec 5.
I met with my mentors three times to discuss the feedback on my proposal and what I can do to improve it. I also reached out to some folks whose previous work could help me, and decided on my end product as a result. Based on the feedback I got, I decided to change a few things in my project: I will start with a simpler artist style to replicate, and if that goes well, I will move to the more complex one. I will also most likely change my sensor from an "air quality" sensor to a CO sensor, because that will allow me to see more variation on campus, but I will have to obtain preliminary data before I can make a final call.
November 14th Update
This week I did a lot of research into how Jitter works and how it communicates with Max and MSP objects. In the process I found a few more helpful sources to replace some of the less relevant related works that I have. The past couple of days I did some programming on a Max patch to familiarize myself with the basics of converting an audio signal into Jitter output. I still have a lot to figure out, but I was able to create a visual representation of a basic sine wave. I learned about audio sample rates and how to buffer an audio stream into a matrix.
Week before Thanksgiving update
A much slower week than normal; I was swamped with a number of other pre-break assignments, so I wasn't able to put much work into reading or revising. I was, however, able to revise my introduction and related works section. The focus for the coming week will be to make any final revisions to my paper and see if there are one or two more papers worth adding. I found a very good paper on fieldwork that helps establish the background for my work.
Weekly Update 12 – Fall break is around the corner!
- Met with Ajit to talk about the follow-up steps on Monday.
- Based on the feedback, we decided to focus efforts on the following:
- Updating the design framework/diagram,
- Writing and explaining the design of the project,
- Reading other published papers to get an idea of the structure of the paper,
- Adding transition paragraphs in the paper.
Weekly update – Rei
During the past week, my focus has been on:
- Preparing for the demo.
- Working on the second draft of the paper.
- Manually labeling the corpus.
- Working on getting the overlapping tests working.
- Outlining the poster.
I have been reading up on documentation and research for the Leap Motion controller so that writing the code will be more manageable later. I have also been updating the commented portions of my proposal and changing certain sections so that they make more sense within the scope of my project.
This week I worked more on my proposal after receiving constructive feedback from Xunfei. I explored some of the papers in more depth to learn how to design my project. I am also researching what tools to buy.
Worked on the demo presentation. Experimented with two datasets, each taking four hours of run time. One observation: changing the labels of the fake news changes the accuracy. Detecting reliable news rather than fake news turned out to be statistically better in performance.
This week I made some minor changes to the python script so now the generated file is in the right format. I also made some major changes to my paper and have an updated draft of it. Next week I’ll be able to get together a draft of my poster and then I should be pretty much set to start extending this work in my psychology capstone next semester.
Weekly update – Rei
During the last week, I mostly continued working on getting the exhaustive parameter search working for the classifiers. At first I ran into a few errors, but in the end I was able to get it working and obtain results.
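For the record, the exhaustive-search idea itself is simple to sketch in a few lines of Python. This toy version just tries every combination in a grid and keeps the best; the grid and scoring function below are made-up stand-ins for the real cross-validated classifier scores, not my actual setup:

```python
from itertools import product

def exhaustive_search(param_grid, score_fn):
    """Try every combination in the grid; return the best (params, score).
    param_grid: dict of parameter name -> list of candidate values."""
    names = sorted(param_grid)
    best = (None, float("-inf"))
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(params)  # in practice: cross-validated accuracy
        if score > best[1]:
            best = (params, score)
    return best

# Toy scorer standing in for a classifier's cross-validation score
grid = {"C": [0.1, 1.0, 10.0], "max_iter": [100, 200]}
best_params, best_score = exhaustive_search(
    grid, lambda p: -abs(p["C"] - 1.0) + p["max_iter"] / 1000.0)
print(best_params)  # {'C': 1.0, 'max_iter': 200}
```

scikit-learn's `GridSearchCV` packages this same loop together with cross-validation, which is the usual way to do it in practice.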
Next, I worked with the Open American National Corpus. I extracted all the text files into one folder and converted them into a CSV where each sentence occupies one row. After that, I ran a script which created two CSV files: one containing sentences that potentially contain analogies and one containing sentences that potentially don't. I have started labeling them manually.
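A rough sketch of that splitting step. To be clear, the cue-word filter below is a deliberately naive stand-in for the script's real heuristic, and the sentence splitter is crude; it only illustrates the sentence-per-row, two-output-files shape of the pipeline:

```python
import csv
import re

# Stand-in cue words; the real filter is not this simple
CUES = ("like", "as if", "similar to")

def split_sentences(text):
    """Crude sentence splitter on ., !, ? (good enough for a sketch)."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def partition(sentences):
    """Split sentences into (maybe-analogy, probably-not) by cue words."""
    maybe, rest = [], []
    for s in sentences:
        (maybe if any(c in s.lower() for c in CUES) else rest).append(s)
    return maybe, rest

def write_csv(path, sentences):
    """One sentence per row, as in the corpus CSVs."""
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        for s in sentences:
            w.writerow([s])

text = "Life is like a box of chocolates. The sky is blue."
maybe, rest = partition(split_sentences(text))
print(len(maybe), len(rest))  # 1 1
```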
I have also started preparing for the demo and the second draft of the paper.
November 7 Update
My proposal adviser will be Dave. We met last Friday to talk about my project idea. One note he gave me was to be clear about what parameters I will use to measure success, since my project is almost entirely software design as opposed to research. Over the next few weeks I need to revise my proposal and prototype some basic features of my audiovisual synth.
Weekly Update 11 – Happy Diwali!
- Did a second pass on more papers this week, focusing on the design and the processes used,
- Met with Ajit on Monday to discuss the project and plan rest of the semester,
- Took Quiz 5 for Project update.
My adviser for my project for the remainder of the semester is Charlie Peck. We met on Tuesday to discuss my weekly plan. Over the next four weeks I will be building the database from the source code, which includes installing many components. I will also be doing statistical analyses on some datasets that I have yet to find, in order to understand the research/database-user side of my project. I have a clear plan of the tasks that lie ahead.
Another Week Another Update
Found an advisor, Ajit, and scheduled a meeting. Otherwise, I took a second look at a couple of papers I had only done a first pass on, and, while waiting on feedback on my first draft, spent a little time looking at some potential test data in case I find myself in need of other sources.
This week, I met with Craig and discussed what tools I need to purchase for the project, such as the RFID reader and tags. There are many options; however, I need to make sure to purchase a reader that allows backend manipulation. I am scheduling a meeting with Ajit to discuss this further.
- I met with Seth Hopper concerning light pollution on campus and different ways to measure that, so I have some ideas to test.
- I was introduced to some of the fire department’s devices to measure air quality, which will help me improve my design to get better data.
This week I restructured things again. I decided to look into other methods of generating a visualization and to separate that process out from NVDA. Under this design, the developer runs NVDA with my plugin installed, which generates a text file. The developer then navigates to my web-based visualization tool, uploads the file, and views the generated visualization. I have a working demo of the visualization tool now, but I'm still ironing out some issues in generating the text files (specifically, coming up with a method for appropriately splitting the extracted chunks).
The front-end development for my project is almost complete. The database setup and connection are done, the backend machine-learning model is integrated with Flask, and the Flask prediction model has been linked with the front end. There are some issues with websites that are not blogs, and I am fixing them. The next step is to make a retrainable model.
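As a sanity check of the wiring, the Flask-to-model link can be reduced to a sketch like this. Everything here is illustrative: the route name, payload shape, and the stand-in `predict` function are assumptions, not my real model or API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(text):
    """Stand-in for the trained classifier; the real model is loaded elsewhere."""
    return "reliable" if "study" in text.lower() else "unreliable"

@app.route("/predict", methods=["POST"])
def predict_route():
    # The front end posts the article text as JSON; field names are illustrative.
    text = request.get_json().get("text", "")
    return jsonify({"label": predict(text)})

# Exercise the route without running a server, using Flask's test client
client = app.test_client()
resp = client.post("/predict", json={"text": "A new study shows..."})
print(resp.get_json())  # {'label': 'reliable'}
```

Keeping the model behind a plain function like `predict` is also what makes a retrainable model easier later: the route doesn't change when the model is swapped out.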
I read more papers in which the researchers used a Leap Motion controller to recognize gestures. People use a variety of methods to classify static and dynamic gestures. One of the more frequently used methods takes advantage of a Hidden Markov Model.
Additionally, I did research on the software available for the Leap Motion controller. By the end of this week I will finalize the framework of my project.
October 31 update
This week I accomplished these tasks:
-Found the rest of my 15 sources for my paper
-Met with Xunfei to discuss my design outline
-Wrote my project proposal
-Did a second pass of a few of my papers
CS-488 Update 8
Worked on my final paper some more, as well as familiarizing myself with my two-node blockchain setup. I'm still working out how to integrate my other modules into the system, particularly how to pass a function from one server to another through the chain, but I am making progress and anticipate being finished with that by tomorrow at the latest. I've glanced at how my poster will be laid out, but it's been a tertiary concern for the past few days.
This week I worked on writing the first draft of my proposal. I found several additional papers to bring myself up to the requirement and fleshed out my idea in a bit more detail. The new papers provided insight into several other computer-vision algorithms that I might be able to use, time allowing. I got notes and incorporated those changes into my draft, and built a diagram for my proposal.
Weekly Update 10 – Happy Halloween
- Did a second pass on three papers this week,
- Worked on First draft for proposal.