
2015 Winter Cryptography Project – P / NP, The Clique Problem

I recently read a book titled The Golden Ticket by Lance Fortnow, in which the P / NP Problem and its importance to the theory of computer science are heavily discussed. Explained briefly, the P / NP Problem asks whether every problem whose solution can be verified quickly can also be solved quickly. In relation to cryptography, P / NP is crucial because modern cryptography depends on math problems that are easy to check but (presumably) hard to solve – hard enough, for example, to hide private information on a public network. One of the most famous NP problems, the Clique problem, is discussed in the book and is, in my opinion, one of the more intriguing problems it covers.

Take, for example, a society called Frenemy, in which any two citizens, upon meeting, have a 50% chance of becoming friends and a 50% chance of becoming enemies. In his book, Fortnow details how it would be virtually impossible to find all cliques of varying sizes inside this society (a clique is a group of people in which everyone is friends with every other member, and the difficulty and time needed to find all cliques grows as the desired clique size increases). At first, I didn't think it would be that difficult to find the largest clique, but after making a simulation for myself, I was proven otherwise.

I stayed true to the Frenemy rules in my simulation – every time someone meets someone else, there is a 50% chance they become friends and a 50% chance they become enemies. I experimented with varying population sizes as well as varying numbers of citizens each person can meet. I wrote two methods for the experiments: one in which everyone meets everyone else in the society, and one in which everyone meets only the people on their own street (there are 9 streets, each housing 1/9 of the total population). To make the data easier to set up and understand, each citizen is given an identification number as well as an 'address', the number of the street they live on. I aimed to find every single clique of size 3 in the society, and to see for myself how difficult that became as the population size increased.
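If you want to see how simple the setup is (and how quickly the search blows up), here's a minimal sketch of the brute-force version – illustrative code, not my actual source:

```java
import java.util.Random;

public class Frenemy {
    public static void main(String[] args) {
        int population = 100;
        Random rng = new Random();

        // friend[i][j] is true if citizens i and j are friends (50/50 on meeting).
        boolean[][] friend = new boolean[population][population];
        for (int i = 0; i < population; i++) {
            for (int j = i + 1; j < population; j++) {
                friend[i][j] = friend[j][i] = rng.nextBoolean();
            }
        }

        // Brute-force search for all cliques of size 3: every triple of
        // mutually friendly citizens. This is the part that blows up as
        // the population grows.
        int cliques = 0;
        for (int a = 0; a < population; a++) {
            for (int b = a + 1; b < population; b++) {
                if (!friend[a][b]) continue;
                for (int c = b + 1; c < population; c++) {
                    if (friend[a][c] && friend[b][c]) cliques++;
                }
            }
        }
        System.out.println("Cliques of size 3: " + cliques);
    }
}
```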


A couple of cliques present in a population size of 100 in which citizens met everyone on their own street.

I found that generating the population and finding all cliques of size 3 took much, much longer as the population size grew, which makes sense in hindsight. Adding a single citizen to a population of 20 where everyone meets everybody has nowhere near the impact of adding one to a population of 10,000, since the number of updates the computer has to make to each citizen's list of friends and enemies is far larger in the second case. When I was reading Fortnow's book, I didn't put much thought into how fast that work grows with population size. I naively assumed the growth was roughly linear, which would make the total number of computations needed to find all cliques of size 3 far smaller than it actually is. Only after I made my own simulation did I truly understand the difficulty surrounding the Clique problem. If you find yourself with some free time, I recommend making a simple simulation like this yourself so you can see the results firsthand.
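As a rough sanity check on why the linear intuition fails: a brute-force search has to examine every triple of citizens, and the number of triples grows cubically with the population size (the two population sizes here are the ones mentioned above):

```latex
\binom{n}{3} = \frac{n(n-1)(n-2)}{6}, \qquad
\binom{20}{3} = 1140, \qquad
\binom{10000}{3} \approx 1.7 \times 10^{11}
```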

I'd like to thank Lance Fortnow for pushing me to explore this problem through his book, as all NP problems (not just the Clique problem) are profoundly interesting in their own right. And, if you're planning to look more into the P / NP problem, Lance Fortnow's The Golden Ticket is a must-read. The discussion, and the very existence, of the P / NP Problem has really expanded my understanding of computer science and the importance of not only the physical act of coding, but the theory behind CS as well.

2015 Winter AI Project – Wall Avoider

The 2015 school year was when I began dabbling in AI. The idea always amazed me. Everything from movie depictions of AI to its real, practical uses in modern-day life was interesting, and I wanted to know more about it.

During the holiday break, I decided to dedicate my programming efforts to learning about Neural Networks. I knew a little bit about them beforehand, like their architecture and the theory of how they work, but had no idea how to implement them in my own projects. The wall avoider project was dedicated to making a very, very simple Neural Network that progressively got better at avoiding walls using a genetic algorithm.

But first, I will briefly explain what Neural Networks are. In short, they are systems modeled after the human brain – that is, they use a set of 'neurons' and 'synapses' that manipulate a given set of input data so that a desired output is reached. The issue, however, is that we don't know ahead of time the weights and values necessary for the network to give us our desired output. There are several ways of measuring error and finding the best set of values. For this project, I used a genetic algorithm to determine the best network and 'breed' it with other good networks, resulting in an equally good if not better Neural Network. Breeding here means choosing 2 Neural Networks that achieve a long run (don't hit the wall) and randomly choosing weights from both networks to produce a new Neural Network.
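The breeding step itself is only a few lines. Here's a rough sketch of one way to do the crossover (illustrative names, not my actual code):

```java
import java.util.Arrays;
import java.util.Random;

// One possible crossover: each weight of the child is copied at random
// from one of the two parent networks.
public class Crossover {
    static final Random rng = new Random();

    static double[] breed(double[] parentA, double[] parentB) {
        double[] child = new double[parentA.length];
        for (int i = 0; i < child.length; i++) {
            child[i] = rng.nextBoolean() ? parentA[i] : parentB[i];
        }
        return child;
    }

    public static void main(String[] args) {
        double[] a = {0.3, -1.2, 0.8};   // weights of one surviving network
        double[] b = {-0.5, 0.9, 0.1};   // weights of the other
        System.out.println(Arrays.toString(breed(a, b)));
    }
}
```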


Anatomy of a Neural Network.

My project's Neural Network has 3 input nodes and 1 output node. The value of each input node is either a 1 or a 0, depending on whether or not its respective 'eye' (a line extending from the center of the runner) hits a wall. These values are each multiplied by their respective weights (random at first, then altered through the genetic algorithm), summed together, and put through an activation function. In my example I used the sigmoid function (1 / (1 + e^(-x))) as the activation function. Sigmoid functions are nice because their range is (0,1), resulting in either an 'on' or an 'off' setting. If a(x) > 0.5, the runner turned right. If a(x) < 0.5, it turned left. If a(x) = 0.5, it didn't turn at all.
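Putting that together, the entire decision process looks roughly like this (an illustrative sketch, not my exact implementation):

```java
public class Decide {
    // Sigmoid activation: maps any input to the range (0, 1).
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // inputs[i] is 1 if eye i sees a wall, 0 otherwise.
    static int decide(int[] inputs, double[] weights) {
        double sum = 0;
        for (int i = 0; i < inputs.length; i++) {
            sum += inputs[i] * weights[i];
        }
        double a = sigmoid(sum);
        if (a > 0.5) return  1;  // turn right
        if (a < 0.5) return -1;  // turn left
        return 0;                // go straight
    }

    public static void main(String[] args) {
        int[] eyes = {1, 0, 0};            // only the first eye sees a wall
        double[] w = {-0.7, 0.2, 0.5};     // random initial weights
        // Prints -1 here: a(-0.7) is about 0.33, which is below 0.5.
        System.out.println(decide(eyes, w));
    }
}
```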


Graphic of my Neural Network. i = input (eyes), W = weights, a = activation

After constructing this system, I began running tests. As the objective wasn't a hard one to learn, the networks mastered the challenge quickly. In the gifs, the left side shows the trial number and the fitness value (how well the network is doing) for each member of the population. p# = parent, c = child, and ^ indicates the current best network.


Trial 1, struggling.


Trial 7, a pretty decent network is bred, but there's still room for improvement.


Trial 25, all networks are doing very well.

This was a great intro to Neural Networks. I definitely want to take this idea further and create deeper, more complex networks for harder tasks.

If you want the source code, you can get it here (Java, 26kb): https://drive.google.com/file/d/0Bz_0wgRmDpKqSVRvbXpzcm1VRVE/view?usp=sharing

2015 Winter AI Project – Maze Runner

The 2015 school year was when I began dabbling in AI. The idea always amazed me. Everything from movie depictions of AI to its real, practical uses in modern-day life was interesting, and I wanted to know more about it.

After my Chess project, I wanted to get a little more serious in my research on artificial intelligence. I looked up everything from examples of how AI is used in day-to-day life to videos of lectures by college professors. Through these lectures, and even some college thesis projects, I learned a lot about the concept of reinforcement learning, which made me want to program something to test out the idea.

First, however, a very brief overview of my current understanding of reinforcement learning. Reinforcement learning is a system in which the AI learns about the environment it's placed in through a system of actions and states. The AI is rewarded or punished in the various states for good actions versus bad actions (depending on whether you want to maximize utility or minimize cost – either way is fine, but for my example I'll be using a reward system), but the AI doesn't know which actions are good or bad until it experiences them. Thus, over time, the AI should begin to gravitate towards actions that earn it the largest cumulative reward.

Alright, now to my project. I decided to go with something simple and intuitive to explore reinforcement learning: an artificial intelligence that finds the quickest path to the exit of a predefined maze. As mentioned before, the AI starts out knowing absolutely nothing about the maze it is placed in and must learn everything from scratch.


Original board displaying reward values for each tile (currently 0 because the AI doesn’t know which tiles are good and bad yet). The blue tile is the current position of the runner (controlled by the AI) and the green tile is the exit.

The reward system essentially takes the rewards of all the tiles the AI visited, sums them together, divides that sum by the number of tiles visited, and uses the resulting value to update the reward of every tile on the path. Each tile then averages that value into the rewards it accumulated in previous trials. The AI gets a heavy reward boost for completing the maze faster than before, and a heavy punishment if it was slower (the overall reward is increased or decreased by a factor of 2). Thus, over time, the AI slowly becomes better and better at solving the maze.
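In code, the update rule looks roughly like this (a simplified sketch with illustrative names, not my actual implementation):

```java
import java.util.List;

public class RewardUpdate {
    static class Tile {
        double reward = 0;  // starts at 0: the AI knows nothing yet
        int trials = 0;
    }

    // Applies one trial's result to every tile the runner visited.
    // fasterThanBest: did this trial beat the previous best time?
    static void updateRewards(List<Tile> visited, boolean fasterThanBest) {
        double sum = 0;
        for (Tile t : visited) sum += t.reward;
        double trialValue = sum / visited.size();

        // Heavy boost for a faster run, heavy punishment for a slower one.
        trialValue *= fasterThanBest ? 2.0 : 0.5;

        // Each tile averages the trial's value into its running reward.
        for (Tile t : visited) {
            t.reward = (t.reward * t.trials + trialValue) / (t.trials + 1);
            t.trials++;
        }
    }
}
```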


The 10th trial. You can see that the general path the AI takes (the tiles with the highest reward) is rather erratic.


The 50th trial. The AI has begun to iron out all the kinks in the path, but it isn’t yet the most efficient.


The 100th trial. You can see the AI has almost found the most efficient path.

I heavily enjoyed this project, as watching something learn, especially a program, is really interesting. I’m really excited to see where I can take this in the future.

If you want to play around with this, you can get the source code here (Java, 24kb): https://drive.google.com/file/d/0Bz_0wgRmDpKqNUVpTFFQdUNaaUk/view?usp=sharing

2015 Winter Game/AI Project – Chess

The 2015 school year was when I began dabbling in AI. The idea always amazed me. Everything from movie depictions of AI to its real, practical uses in modern-day life was interesting, and I wanted to know more about it.

I had been wanting to make chess for a while, as I've long been a fan of the game. What better way to make it than to explore simple self-learning artificial intelligence at the same time? Before getting to the AI, however, I first had to make the game itself. Generating a list of possible moves was simple enough, but the real challenge was implementing more complex rules like castling, checks, stalemates, etc.


Final game showing all possible moves.

After spending a while messing around with the game itself, the time came to begin working on the artificial intelligence. Having known nothing about the subject (I started learning about concepts like neural networks and genetic algorithms shortly after), I decided on a simple concept. The artificial intelligence would save all the games it had previously played and look back on them, determine the statistical probability that it would win if it made a given move (based on the outcomes of past games containing the same move), and decide whether or not to proceed from there. If it decided that the move was bad, it would instead make a random move and add that to the database.
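Sketched out, the move-selection logic was roughly the following (illustrative names and thresholds, not my actual code):

```java
import java.util.List;
import java.util.Map;
import java.util.Random;

public class MoveChooser {
    static final Random rng = new Random();

    // Record of past games: for each candidate move, how many games
    // containing it were won out of how many were played.
    record Stats(int wins, int games) {
        double winRate() { return games == 0 ? 0 : (double) wins / games; }
    }

    // Pick the historically best move if it looks good enough;
    // otherwise explore a random move (to be added to the database later).
    static String chooseMove(List<String> legalMoves, Map<String, Stats> history) {
        String best = null;
        double bestRate = 0;
        for (String move : legalMoves) {
            Stats s = history.get(move);
            if (s != null && s.winRate() > bestRate) {
                bestRate = s.winRate();
                best = move;
            }
        }
        // If no known move has a winning record, try something new.
        if (best == null || bestRate <= 0.5) {
            return legalMoves.get(rng.nextInt(legalMoves.size()));
        }
        return best;
    }
}
```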

There were a few problems with my method, however, that presented themselves as I neared the end of writing the program. Mainly, it would take ages before the AI became even decent at the game because, no matter how fast it played, it would almost always play a brand-new game every single iteration (Hardy's estimate of the number of possible chess games is around 10^(10^(50)), more on the subject here), causing the file containing all of its previous games to become unbelievably massive long before it got good at the game.


A fraction of a single game stored in a .txt file.

Despite its shortcomings, this project was still a simple and fun introduction to the world of self-learning artificial intelligence.

2015 Fall Game Project – Ludum Dare 33

Ludum Dare, if you're unfamiliar with it, is a game-creation competition in which entrants are given a central theme and are required to make an entire game – code, art, sound, etc. – in two days.

The theme of the 33rd Ludum Dare competition was 'You are the Monster'. The first games that came to mind were ones like Rampage for the Nintendo GameCube, a game where you are a giant monster destroying cities and causing chaos just for the hell of it. Thus, Colossus Guard was born.


Main menu.

I took the idea from Rampage and combined it with an endless runner; the player can only move right, but can destroy houses and step on civilians as they run away.


Final product.

It was the first game in which I implemented more complex animations, as well as the first with any sort of sound. I expected that process to take a very long time, but was pleased to find it shorter than I had expected. This was also the first game in which I used delta time to keep the game speed consistent across frame rates, which improved playability on other machines.
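The delta-time idea is simple: measure how long the last frame took and scale all movement by it, so entities move at the same speed no matter the frame rate. A bare-bones sketch (not my actual loop):

```java
public class GameLoop {
    static double x = 0;                 // some entity's position
    static final double SPEED = 120.0;   // pixels per second

    public static void main(String[] args) throws InterruptedException {
        long last = System.nanoTime();
        while (x < 600) {
            long now = System.nanoTime();
            double dt = (now - last) / 1_000_000_000.0;  // seconds since last frame
            last = now;

            x += SPEED * dt;  // same speed whether we run at 30 or 144 fps
            Thread.sleep(16); // stand-in for the rendering work
        }
        System.out.println("reached x = " + x);
    }
}
```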

I thoroughly enjoyed this process and learned a surprising amount in those two days.

The source code can be found here (Java, 201mb): https://drive.google.com/file/d/0Bz_0wgRmDpKqckpUTnNXVFdTclU/view?usp=sharing

 

2015 Fall Cryptography Project – Password Generator

During the early part of 2015 I was fairly interested in cryptography. In particular, I was interested in penetration testing of all sorts – accounts, WEP and WPA wifi password cracking, etc. Of course, I was only cracking accounts I owned.

The pinnacle of my interest in pen testing came when I decided to make a password cracker. For example, say my password is C0d3BR3ND4N!. You enter a few words that you think your target would include in their password, as well as a few parameters, and let it go. The magic components for my password would simply be brendan and code, input with the parameters -H (includes capitalization permutations as well as numbers substituted for letters), -P !# (adds ! and the numbers 0-9 before and after each possible password), and -w (writes to file), and after a few seconds the program would find my password with relative ease.


Portion of the words generated.

It essentially mashes together all the words it was given, finds all of the capitalization permutations and all the ways of swapping numbers for letters, etc., and then adds them to the file.


Terminal display.

Because this program can crack passwords, I’ll refrain from posting the source code. You can, however, access the .txt file I generated for the purposes of this post here: https://drive.google.com/file/d/0Bz_0wgRmDpKqNHFtU2pLZV96SDg/view?usp=sharing

2015 Summer Game Project – Fast-Paced Rogue-Like

In an attempt to gain familiarity with different approaches and methods towards game design and implementing images/text files into projects, I created several short games over the summer that were fast, fun, and a learning experience.

Spending a while playing games like FTL and The Binding of Isaac, and even watching twitch.tv streams like FourBitFriday's Catacomb Kids, really got me interested in the realm of rogue-like games. This time, I wanted to make a game that was as organized as I could make it while also advancing my skills in animation and level generation.

In my previous games, animating had been a real challenge because I had never built anything solely for the purpose of animation. With this game, I decided to simplify the process so it wouldn't be such a pain to implement every time I wanted a new entity. That determination pushed me to learn as much as I could about the animation process and sprite sheets as a whole, and in the end I found a pretty simple way of handling animation.
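The simple approach boils down to slicing the sheet into frames and cycling through them on a timer. Roughly (an illustrative sketch using the standard BufferedImage API, not my exact class):

```java
import java.awt.image.BufferedImage;

// Cycles through equally-sized frames laid out in a row on a sprite sheet.
public class Animation {
    private final BufferedImage[] frames;
    private final double frameTime;   // seconds each frame is shown
    private double elapsed = 0;
    private int current = 0;

    public Animation(BufferedImage sheet, int frameWidth, double frameTime) {
        int count = sheet.getWidth() / frameWidth;
        frames = new BufferedImage[count];
        for (int i = 0; i < count; i++) {
            frames[i] = sheet.getSubimage(i * frameWidth, 0,
                                          frameWidth, sheet.getHeight());
        }
        this.frameTime = frameTime;
    }

    // Called once per game update with the frame's delta time.
    public void update(double dt) {
        elapsed += dt;
        while (elapsed >= frameTime) {
            elapsed -= frameTime;
            current = (current + 1) % frames.length;
        }
    }

    public BufferedImage currentFrame() {
        return frames[current];
    }
}
```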


Final product.

I also spent a long time figuring out how to add smooth camera controls. After playing the game with a rigid camera, I knew it was time to make it smoother in order to encourage a faster pace throughout the game. In the process I learned a lot about how cameras and perspectives interact with the current viewpoint, knowledge that would become invaluable in future projects.
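The core of a smooth camera is just interpolation: each frame, move the camera a fraction of the remaining distance toward the player instead of snapping to them. A sketch (the smoothing constant is illustrative):

```java
public class Camera {
    double x, y;
    static final double SMOOTHING = 0.1;  // fraction of the gap closed per frame

    // Ease toward the target instead of snapping to it.
    void follow(double targetX, double targetY) {
        x += (targetX - x) * SMOOTHING;
        y += (targetY - y) * SMOOTHING;
    }
}
```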


Close-up of an enemy mob in the game.

Lastly, I learned a lot about level creation. I had never made something that could load levels before, but I wanted to build a system for it specifically for this game. I decided to use a simple program like MSPaint to draw out the levels, using colors to represent the various blocks, then load them in and connect them room by room. This proved more of a challenge than I had originally thought, as I did not anticipate the difficulty of reading the RGB values of individual pixels. I pushed through by learning not only how to inspect each individual pixel, but also how RGB values are encoded.
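Reading a level back out of an image mostly comes down to walking the pixels and unpacking each packed RGB value. A minimal sketch (the color-to-tile mapping here is illustrative, not my exact palette):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class LevelLoader {
    public static int[][] load(String path) throws IOException {
        BufferedImage img = ImageIO.read(new File(path));
        int[][] tiles = new int[img.getHeight()][img.getWidth()];
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                // Unpack the 8-bit channels from the packed ARGB int.
                int r = (rgb >> 16) & 0xFF;
                int g = (rgb >> 8) & 0xFF;
                int b = rgb & 0xFF;
                if (r == 0 && g == 0 && b == 0)        tiles[y][x] = 1; // wall
                else if (b == 255 && r == 0 && g == 0) tiles[y][x] = 2; // enemy spawn
                else                                   tiles[y][x] = 0; // empty
            }
        }
        return tiles;
    }
}
```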


An example of what a level looks like in MSPaint. Gray/Black = walls, Blue = enemy spawn points, etc.

This project was a fun one. I feel it really enhanced my organizational skills when it comes to code as a whole, and strengthened my understanding of how the individual parts of a project come together in the end to make one single product.

You can find the source code here (Java, 742kb): https://drive.google.com/file/d/0Bz_0wgRmDpKqTXNCSjFEU0V3X2M/view?usp=sharing

2015 Summer Game Project – Creation Sandbox

In an attempt to gain familiarity with different approaches and methods towards game design and implementing images/text files into projects, I created several short games over the summer that were fast, fun, and a learning experience.

This game is a side-scrolling, open-world, randomly generated creation sandbox similar to games like Minecraft or Terraria. In fact, the inspiration for this game came from spending several weeks replaying Terraria. Anyway, I wanted to explore concepts like world generation, mob behavior, inventory systems, animation, and more.


Final product.

After creating a basic world, I decided to first take on mob interaction. Having done nothing like this before, I found it challenging at first to figure out not only how to spawn mobs in strategic places around the player, but also how to make them unique from each other in their behavior and their interaction with the player. I added two mobs, a snake and a slime: the snakes are faster, while the slimes jump higher.
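Structurally, the two mobs share almost everything and differ only in a couple of stats, which maps naturally onto a small class hierarchy. A sketch (illustrative numbers and fields, not my actual classes):

```java
// Shared mob behavior; subclasses only tweak the numbers.
abstract class Mob {
    double x, y;
    double speed;      // horizontal movement per update
    double jumpPower;  // initial upward velocity of a jump

    Mob(double speed, double jumpPower) {
        this.speed = speed;
        this.jumpPower = jumpPower;
    }

    // Move toward the player each update.
    void chase(double playerX) {
        x += (playerX > x ? speed : -speed);
    }
}

class Snake extends Mob {
    Snake() { super(3.0, 4.0); }  // faster on the ground
}

class Slime extends Mob {
    Slime() { super(1.0, 9.0); }  // slower, but jumps much higher
}
```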


Some slimes jumping above the player.

Another of the most challenging yet most interesting parts of making this game was the world generation. I spent about 3 days wondering how I would add interesting caves, or even things like ore, to the game. My final solution was to add something of a 'miner': a randomly placed dirt destroyer that clears out the underground area it starts in and moves around randomly until it decides to stop.
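The miner is essentially a random walk over the tile grid: drop it underground, clear every dirt tile it touches, and let it stop at random. A sketch (tile IDs as in the screenshot below; the stop probability is illustrative):

```java
import java.util.Random;

public class Miner {
    static final int DIRT = 1, CAVE = -1;  // tile IDs as in the screenshot below
    static final Random rng = new Random();

    // Carve a cave by random-walking through the dirt, clearing as we go,
    // until the miner randomly decides it's done.
    static void mine(int[][] world, int startX, int startY) {
        int x = startX, y = startY;
        while (rng.nextDouble() > 0.002) {  // ~500 steps on average
            if (world[y][x] == DIRT) world[y][x] = CAVE;
            switch (rng.nextInt(4)) {
                case 0 -> x = Math.max(0, x - 1);
                case 1 -> x = Math.min(world[0].length - 1, x + 1);
                case 2 -> y = Math.max(0, y - 1);
                case 3 -> y = Math.min(world.length - 1, y + 1);
            }
        }
    }
}
```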


Example of a very small fraction of the world, shown as tile IDs. -1 = empty space underground, 0 = sky, 1 = dirt, 2 = grass, 4 = tree, etc.

All in all, I learned a lot from this project, whether intentionally or unintentionally. One of the more important lessons it taught me is that organization, especially between objects, is crucial for keeping the entire project neat and orderly.

You can find the source code here (Java, 311kb): https://drive.google.com/file/d/0Bz_0wgRmDpKqZFpfMHlEUlNvdU0/view?usp=sharing

 
