Senior Capstone – Finished

Final Paper

A final version of the paper can be found here.

Final Repository

A final version of the repository can be found here.

Changes

Demographic weighting was added, along with a file that divides Rhode Island into three districts. Unfortunately, Pennsylvania and Kentucky data were never added due to time constraints, and this has been noted. The paper has been updated to include results and recommendations for future work.

Senior Capstone, Looking Back

Abstract

An ever-present problem in United States politics and governance is unfairly partisan congressional redistricting, often referred to as gerrymandering. One proposed method for removing gerrymandering is to use software to create nonpartisan, unbiased congressional district maps, and several researchers have done work along these lines. This project seeks to be a tool with which one can create congressional maps while adjusting the weights of the various factors it takes into account, and then evaluate those maps by using the Monte Carlo method to simulate thousands of elections and see how ‘fair’ the maps are.

Software Architecture Diagram

As shown in the figure above, the software creates a congressional district map based on pre-existing datasets (census data and voting history) as well as user-defined factor weighting; the map then undergoes a Monte Carlo process that simulates thousands of elections in order to evaluate its fairness. The census data is used both for the user-defined factor weighting and for estimating the likelihood of voting for either party (Republican or Democrat); the factors include race/ethnicity, income, age, gender, geographical location (urban, suburban, or rural), and educational attainment. The voting history consists of precinct-by-precinct results in congressional races, and it carries a heavy weight in the election simulation.
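
To make the evaluation step more concrete, below is a minimal sketch of how a Monte Carlo election simulation over a candidate map might look. The district names, per-precinct Democratic vote probabilities, turnout figures, and swing parameter are invented placeholders standing in for values the real tool would derive from the census and voting-history data; this illustrates the technique, not the project’s actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical map: each district is a list of (probability a voter votes Democratic,
# expected turnout) pairs for its precincts. In the real tool these values would come
# from the census factors and precinct-level voting history described above.
districts = {
    "District 1": [(0.62, 1800), (0.55, 2400), (0.48, 1200)],
    "District 2": [(0.51, 2000), (0.47, 2200), (0.53, 1500)],
    "District 3": [(0.44, 1700), (0.58, 1900), (0.50, 2100)],
}

def simulate_election(swing_sd=0.03):
    """Simulate one election; return the number of districts the Democrats win."""
    swing = rng.normal(0.0, swing_sd)  # statewide swing between elections
    dem_seats = 0
    for precincts in districts.values():
        dem_votes, total_votes = 0, 0
        for p_dem, turnout in precincts:
            p = float(np.clip(p_dem + swing, 0.0, 1.0))
            dem_votes += rng.binomial(turnout, p)  # Democratic votes in this precinct
            total_votes += turnout
        dem_seats += int(dem_votes * 2 > total_votes)
    return dem_seats

def evaluate_map(n_trials=10_000):
    """Monte Carlo evaluation: distribution of Democratic seats over many simulations."""
    counts = np.bincount(
        [simulate_election() for _ in range(n_trials)], minlength=len(districts) + 1
    )
    return counts / n_trials

print("P(Democrats win k of 3 districts):", evaluate_map())
```

One simple fairness check on top of this is to compare the average simulated seat share against the statewide vote share implied by the same data.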

Research Paper

The current version of my research paper can be found here.

Software Demonstration Video

A demonstration of my software can be found here.

CS 388 – Week 13 – Updates

This week, I continued working on my project proposal, submitting my second draft after some much-needed updates. I still need to work further on the Related Works section. I also continued working on an early implementation of the project. Lastly, I prepared a first draft of my presentation slides.

CS388 – Week 12 – Update

This previous week, the work I’ve done has been two-pronged, as has become the norm and will continue to be for the rest of this semester. First, I continued work on the basic implementation of the game. I currently have the control module working, as well as a looping stage that I created in order to test the controls. On the proposal side, I’ve been making edits based on the in-class peer review that we did, as well as on the more recent feedback given by Xunfei. I also met with Xunfei to go over her feedback on my first draft and to update her on my progress.

CS388 – Week 11 – Updates

This past week, I’ve finalized the basic design for the game I will be implementing. It will be a horizontal auto-runner, where the player ducks/jumps to avoid obstacles to the beat of the music in order to keep playing. I continued familiarizing myself with Unity2D, and plan on starting work on the game this upcoming week. Additionally, I wrote up the first draft of my project proposal.

CS388 – Week 10 – Updates

This past week, my work has been split in two directions. First, I’ve been refamiliarizing myself with Unity by going back through my second Game Design project. Beyond that, I’ve been familiarizing myself with Unity2D for the first time, which I plan on using for the senior project due to its simplicity compared to Unity3D. Besides getting used to the main software engine I will be using, I also continued reflecting on my proposal outline; I’ve been looking more into different PCG-G algorithms and have decided on using the chunk paradigm as my second-stage generation algorithm. Its stages won’t be as directly aligned to the music, but it should improve efficiency.
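
For readers unfamiliar with the chunk paradigm, the rough idea is to stitch together small hand-authored obstacle patterns rather than placing every obstacle individually. The sketch below illustrates that idea in Python with invented chunk contents and player actions; the actual project would implement this inside Unity.

```python
import random

# Hypothetical hand-authored chunks: each is a short obstacle pattern, expressed as
# (beat offset within the chunk, action the player must take) pairs.
CHUNKS = [
    [(0, "jump"), (2, "duck")],
    [(0, "duck"), (1, "duck"), (3, "jump")],
    [(1, "jump"), (2, "jump"), (3, "duck")],
]
CHUNK_BEATS = 4  # every chunk spans the same number of beats

def generate_stage(total_beats):
    """Stitch randomly chosen chunks together until the stage covers the whole song."""
    stage, beat = [], 0
    while beat < total_beats:
        for offset, action in random.choice(CHUNKS):
            stage.append((beat + offset, action))
        beat += CHUNK_BEATS
    return stage

print(generate_stage(16))
```

Because obstacles are placed one chunk at a time rather than one audio event at a time, the generator does less work per beat, which is where the efficiency gain comes from; the trade-off, as noted above, is looser alignment with the music.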

CS 388 – Week 9 – Updates

This week, I worked more on my proposal outline. Tuesday morning, I met with somebody from EPIC to go over my grant application to attend GDC, where I may try to get some playtesting data on my project from professionals. This morning, I met with Xunfei to look at my proposal outline before revising and finalizing it.

CS 388 – Week 8 – Updates

During this week, I continued work on my literature review after meeting with Xunfei. After finishing the review, I also started work on my proposal outline and continued looking for more resources to use in my project. I specifically need to find more procedural generation source code for game stages; I’m fairly happy with my two music generation methods.

CS388 – Week 7 – Update

This week, I wrapped up the first draft of my literature review. I’ll be meeting with the writing center as I embark on the final draft. I’ve continued working on nailing down the exact idea I’ll be proposing, as well as looking into the resources found throughout the papers I’ve read, from algorithms to source code. I’ve done some basic work on a prototype game, but have been too busy to make much progress yet. I think a good portion of my project may include comparisons between different methods, and combinations of methods, for PCG-G (mostly different algorithms) and music generation (mostly grammar-based versus machine learning).

CS 388 – Week 6 – Updates

I’ve finalized the base project idea – I’m going with the one involving music-generating AI. Additionally, I’ve officially gotten a proposal adviser, Xunfei, whom I’ll be meeting with every Wednesday morning. I’m having some issues limiting the scope and application of my project, which I’ll focus on while I finish up my literature review. As for the review, I’ve read 9 of the 10 papers, so I only need to read the last one and put my notes into the literature review format.

CS388 – Week 5 – Sources

In the past two weeks, I’ve read through a total of 12 sources, four for each idea, and created annotated bibliographies based on them. While some of these papers are more useful than others, each has been helpful in one way or another – and, indeed, I’ve been able to find more papers to look into as I modify my ideas based on both peer feedback and my own research. Some papers have given me actual algorithms to implement or to use as building blocks for my own; others have shown me, for example, how extensively one of my ideas (copyright detection) has already been researched, and pointed me toward new directions in which I could take the project to make it something original.

CS388 – Week 4 – Update

This week, I created annotated bibliographies for papers pertaining to each of my three ideas. I also further considered how I might update my project ideas – I’m unlikely to go with the copyright detection idea. If I go with the visualization-of-music idea, I’m likely to turn it on its head and deal with the audiation of visual art instead. I’m still fairly content with the rhythm game idea in its current state.

CS 388 – Week 3 – Updates

This week, I browsed chiefly through ACM and Google Scholar to find five papers on each of my topics. I additionally met with Charlie to go through my three ideas, getting some helpful insights about my projects and what is being looked for in the capstone, as well as generating some new ideas based on the original three. The copyright detection idea in particular is likely to be cut – it’s a field already rich with research, making it harder to come up with something better or wholly original.

CS 388 – Week Two – Three Ideas

Project Idea One:

Name of Your Project

Implementing AI into Rhythm-Based Video Games

What research topic/question is your project going to address?

The project is going to attempt to address a common issue with rhythm games – particularly on mobile devices – where, upon finishing a song, the game goes into an ‘Endless Mode’. Currently, Endless Modes tend to be the same song repeated, generally increasing in tempo as time goes on, which makes the game more repetitive and boring the better you are and the longer you can last.

What technology will be used in your project?

I would use Unity, as well as Melodrive, primarily. Further technology, including alternative music-based AIs, may be explored as needed. A laptop or desktop computer to code on, as well as a mobile device, would be used for testing.

What software and hardware will be needed for your project?

The Unity game engine, Melodrive, a laptop or desktop, and a mobile device.

How are you planning to implement it?

I would implement this project by first creating a simple rhythm-based game in Unity (one example of such a game is the popular Piano Tiles), along with a system by which the game can generate a level from any song uploaded by the user. Once that framework has been created, I would use Melodrive’s Unity API (or create an API using another music-based AI if necessary) to generate endless new music for a truly ‘Endless’ experience that never bores the user with repeated music. The implementation would also allow the user to save the songs generated by the AI – each generated song would be of similar length to the original, and each successive song would become more intense, fast-paced, and difficult.
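
As an illustration of the level-generation step – turning an arbitrary uploaded song into timed objects for the player to hit – here is a rough offline sketch in Python using librosa’s beat tracker. The actual game would be built in Unity, so this only stands in for the analysis logic; the file name and the lane-assignment rule are invented placeholders.

```python
import librosa

def song_to_level(audio_path, lanes=4):
    """Turn an audio file into a list of (time in seconds, lane) tap events."""
    y, sr = librosa.load(audio_path)
    # Estimate tempo and beat positions; each beat becomes a moment the player must hit.
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    beat_times = librosa.frames_to_time(beat_frames, sr=sr)
    # Use onset strength at each beat to choose a lane, so stronger beats feel different.
    onset_env = librosa.onset.onset_strength(y=y, sr=sr)
    events = []
    for frame, t in zip(beat_frames, beat_times):
        strength = onset_env[min(int(frame), len(onset_env) - 1)]
        lane = int(strength / (onset_env.max() + 1e-9) * lanes) % lanes
        events.append((float(t), lane))
    return float(tempo), events

tempo, level = song_to_level("example_song.mp3")
print(f"Estimated tempo: {tempo:.1f} BPM, {len(level)} tap events")
```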

How is your project different from others? What’s new in your project?

My project is different from others due to the two fields it delves into – video game design and AI. The project would be new from its conception; from what I can tell, there are currently no games or apps on the mobile market that offer a non-repetitive ‘Endless Mode’ upon completion of a song/level. Similar technology could also be used in any other video game, in place of a repeating song serving as the background music.

What are the difficulties of your project? What problems might you encounter during your project?

The first difficulty I’ll have with this project is coding the initial game – while manually creating parameters for set songs (i.e., when to make objects for the player to tap, where to place them, and so forth) is easy, albeit time-consuming, making a program that dynamically creates the level from any input sound clip will be considerably more difficult. Additionally, depending on the limitations of Melodrive’s Unity API, it may be a challenge to generate endless music during a play session. If that is the case, I’ll have to jury-rig it to do so, or alternatively create an all-new API using one of the various other music-based AIs.

Project Idea Two:

Name of Your Project

Copyright Infringement Detection in Music


What research topic/question is your project going to address?

There have been two major challenges to copyright law in recent times – a surplus of questionable copyright claims by various groups for audio and visual copyright infringement on YouTube, and the Flame v. Katy Perry trial, in which Flame claims Perry infringed on the copyright of his song Joyful Noise in her song Dark Horse. Currently, Flame has won the case, though it is likely to be appealed. While I am skeptical of the legitimacy of his claim, as well as of many of those on YouTube, it is important for content creators and musicians to be aware of whether they are liable to be sued or accused of infringing copyright. As such, this project will attempt to create software that compares an input audio file against a library of music to see whether it is at risk.


What technology will be used in your project?

I’m not yet sure exactly how it would be handled, but the project would make extensive use of databases and database management software – likely a database consisting of compressed data about each song, which the software would compare against the input file.

What software and hardware will be needed for your project?

A desktop – and, in all likelihood, such a database would need to be hosted on a server if one is not readily available.

How are you planning to implement it?

I would pull from databases of music and compare them against the input file to assess potential copyright infringement – potentially including a “how likely” measurement. I would look into Shazam and Google’s “what song is this” feature to see whether I can incorporate any of that software.
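
A very rough sketch of the comparison step might look like the following, using an averaged chroma profile as the “compressed data” stored per song and cosine similarity as the risk score. The file names, the threshold, and the choice of features are all assumptions made for illustration – real systems (Shazam included) use far more sophisticated fingerprints and matching.

```python
import numpy as np
import librosa

def fingerprint(audio_path):
    """Compute a compact chroma-based summary of a song (one possible 'compressed' form)."""
    y, sr = librosa.load(audio_path, duration=60)    # first minute is enough for a sketch
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr)  # 12 x n_frames pitch-class energies
    return chroma.mean(axis=1)                       # 12-dimensional average profile

def similarity(fp_a, fp_b):
    """Cosine similarity between two fingerprints (in [0, 1] for non-negative features)."""
    return float(np.dot(fp_a, fp_b) / (np.linalg.norm(fp_a) * np.linalg.norm(fp_b) + 1e-9))

# Hypothetical pre-computed library of fingerprints, keyed by song title.
library = {"Some Existing Song": fingerprint("some_existing_song.mp3")}

query = fingerprint("input_song.mp3")
for title, fp in library.items():
    score = similarity(query, fp)
    flag = "  <- possible risk" if score > 0.9 else ""
    print(f"{title}: similarity {score:.2f}{flag}")
```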


How is your project different from others? What’s new in your project?

This project differs from others in the particular scope of the problem it addresses – I’ve yet to see similar projects dealing with music in such a way, particularly with regard to copyright law. I may be able to use some information from prior projects involving databases of music (such as a music recommendation project I saw last year).

What are the difficulties of your project? What problems might you encounter during your project?

Assuming I’m able to find a database that suits my purposes, the greatest difficulties will likely be time-based: going through a database of ‘every’ song would be time-consuming, so I’ll have to find ways to speed up the process. If there is no such database available, then I’ll face another problem – I’ll have to build my own database, likely with only a selected number of songs, as a test case for the project, which would make it more of a proof of concept than a useful tool.

Project Idea Three:

Name of Your Project

(Visual) Art to Music


What research topic/question is your project going to address?

Music is made of sound waves; color is made of waves on the electromagnetic spectrum. Given this similarity, and the two media’s proximity to each other as art forms, this project seeks to turn a piece of visual art into music, or vice versa, by mapping various colors to different pitches of sound based on the properties of their respective waves.


What technology will be used in your project?

Depending on which language’s libraries work best for sound and image reading/manipulation, I’ll base the project around that language – this is something I would ask a professor for help in determining.


What software and hardware will be needed for your project?

At its core, no special software should be needed besides what I already have on my computer – language interpreters, code editors, etc.


How are you planning to implement it?

I’d like to build this project from scratch. There would be a lot of work at the beginning just trying to map images to sounds in a way that keeps a painting from coming out as a completely garbled mess – if I can’t find a way to do so without destroying the integrity of the project, it would mostly be used for turning songs into visual representations. Currently, the plan is to map each note to a new pixel, or batch of pixels.
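
A first pass at the pixel-to-note mapping might look something like the sketch below, which maps the average hue of each batch of pixels to a sine-tone frequency and writes the result to a WAV file. The file names, frequency range, note length, and batch size are arbitrary placeholders rather than settled design decisions, and a real implementation would need to downsample large images to keep the output a reasonable length.

```python
import numpy as np
from PIL import Image
from scipy.io import wavfile

SAMPLE_RATE = 44100
NOTE_SECONDS = 0.2          # duration assigned to each pixel batch (arbitrary)
LOW_HZ, HIGH_HZ = 220, 880  # hues are mapped onto this two-octave range (arbitrary)

def image_to_audio(image_path, out_path, batch=64):
    """Scan an image left to right, top to bottom, turning each batch of pixels into a tone."""
    img = Image.open(image_path).convert("RGB").convert("HSV")
    hsv = np.asarray(img, dtype=float)
    hues = hsv[..., 0].flatten() / 255.0  # hue of every pixel, scaled to [0, 1]
    t = np.linspace(0, NOTE_SECONDS, int(SAMPLE_RATE * NOTE_SECONDS), endpoint=False)
    tones = []
    for i in range(0, len(hues), batch):
        freq = LOW_HZ + hues[i:i + batch].mean() * (HIGH_HZ - LOW_HZ)
        tones.append(0.3 * np.sin(2 * np.pi * freq * t))  # one short sine tone per batch
    audio = np.concatenate(tones)
    wavfile.write(out_path, SAMPLE_RATE, (audio * 32767).astype(np.int16))

image_to_audio("painting.png", "painting.wav")
```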


How is your project different from others? What’s new in your project?

From a few searches, I haven’t seen a similar project; as far as I can tell, this is a more or less original idea. What’s ‘new’ in this project is the transformation from audio to visual and vice versa, aiming for a generally satisfying end result.


What are the difficulties of your project? What problems might you encounter during your project?

I believe the most difficult portion of this will be trying to turn a complicated picture into a satisfying audio file that is not convoluted or unpleasant. I do, however, believe the song-to-visual component would be significantly easier, though the resulting images may need to be kept rather vague as a result. The implementation of both sides could change drastically over the course of the project in order to achieve a pleasant result.

CS 388 – Week 1 – First Idea

Name of Your Project

Implementing AI into Rhythm-Based Video Games

What research topic/question is your project going to address?

The project is going to attempt to address a common issue with rhythm games – particularly on mobile devices – where, upon finishing a song, the game goes into an ‘Endless Mode’. Currently, Endless Modes tend to be the same song repeated, generally increasing in tempo as time goes on, which makes the game more repetitive and boring the better you are and the longer you can last.

What technology will be used in your project?

I would use Unity, as well as Melodrive, primarily. Further technology, including alternative music-based AIs, may be explored as needed. A laptop or desktop computer to code on, as well as a mobile device, would be used for testing.

What software and hardware will be needed for your project?

The Unity game engine, Melodrive, a laptop or desktop, and a mobile device.

How are you planning to implement it?

I would implement this project by first creating a simple rhythm-based game in Unity (one example of such a game is the popular Piano Tiles), along with a system by which the game can generate a level from any song uploaded by the user. Once that framework has been created, I would use Melodrive’s Unity API (or create an API using another music-based AI if necessary) to generate endless new music for a truly ‘Endless’ experience that never bores the user with repeated music.

How is your project different from others? What’s new in your project?

My project is different from others due to the two fields it delves into – video game design and AI. The project would be new from its conception; from what I can tell, there are currently no games or apps on the mobile market that offer a non-repetitive ‘Endless Mode’ upon completion of a song/level. Similar technology could also be used in any other video game, in place of a repeating song serving as the background music.

What are the difficulties of your project? What problems might you encounter during your project?

The first difficulty I’ll have with this project is coding the initial game – while manually creating parameters for set songs (i.e., when to make objects for the player to tap, where to place them, and so forth) is easy, albeit time-consuming, making a program that dynamically creates the level from any input sound clip will be considerably more difficult. Additionally, depending on the limitations of Melodrive’s Unity API, it may be a challenge to generate endless music during a play session. If that is the case, I’ll have to jury-rig it to do so, or alternatively create an all-new API using one of the various other music-based AIs.