Work in Progress / Neural Network Demo... Minesweeper.

Pincho Paxton
Joined: 8th Dec 2002
Posted: 31st Mar 2010 01:46 Edited at: 2nd Apr 2010 13:40
Firstly, thanks to RiiDii for posting a snippet that helped me make this. I also have to say that most of the program is my own work.

Well, I have got a Neural Network sort of working. It has an IQ of about 2, lol! :grin: But sometimes it appears to be thinking, and it's quite scary to watch.

The idea of this Neural Network is for the tank to figure out its mission, and then achieve it: collecting the mines in the minefield. It also has to figure out how its own tracks work. None of the AI in this program helps the tank directly; all of the AI is a virtual model of an actual brain, so the program learns all by itself what it has to do.

Be prepared for a long wait, however; this current version only starts to look more intelligent at around Epoch 30000.

I don't know if the tank ever figures out the entire process. I watched it for over an hour, and it was only heading for half of the mines.


Windowed Version... V2 of the Neural Network is here...

http://homepage.ntlworld.com/pinchopaxton/Minesweeper.rar

When you have unpacked it, find the Minesweeper folder and double-click the EXE icon inside.

Darth Vader
Joined: 10th May 2005
Location: Adelaide SA, I am the only DB user here!
Posted: 31st Mar 2010 04:15
Sounds interesting. I remember when RiiDii was working on this AI; it was quite fascinating. Also, I have Windows 7 Professional 64-bit and your program works.

I would love to watch it, but I don't have the time. If you recompile it into a windowed version, then I can do some work and watch it at the same time...

Awesome stuff though.

Benjamin
Joined: 24th Nov 2002
Location: France
Posted: 31st Mar 2010 05:10
Sounds pretty interesting... although I don't want to have to wait that long personally. I agree that a windowed version would be good. Also, I see that you seem to have a sync limit of 60; I'd suggest simply enabling v-sync instead, as it'll suck up less CPU (which is important if you're running this app for an hour on a laptop, for instance).
Virtual Nomad
Moderator
Joined: 14th Dec 2005
Location: SF Bay Area, USA
Posted: 31st Mar 2010 06:22
i'd appreciate a windowed version, too

also, the sync 60 is basically limiting it to X amount of thoughts per second, right? (not sure if you've capped it otherwise, as well). i'm curious why this route was chosen versus allowing it to learn faster, thus negating the "20 minutes before a sign" aspect.

interesting stuff (that i know nothing about).

ah, btw... i expected the cpu to play the standard minesweeper game based on your title. that would be interesting as well.

Virtual Nomad @ California, USA
AMD Phenom™ X4 9750 Quad-Core @ 2.4 GHz . 8 GB PC2-6400 RAM
ATI Radeon HD 3650 @ 512 MB . Vista Home Premium 64 Bit
Pincho Paxton
Joined: 8th Dec 2002
Posted: 31st Mar 2010 12:35 Edited at: 31st Mar 2010 12:36
OK, I'll turn off the sync limit. I have a few ideas to add to it. For a start, I need to be able to turn off the learning: I think the tank may well have learned the problem but is still trying to experiment with its tracks, so it looks like it is messing around too much. I was also thinking of having a second tank just watching the first tank, so instead of trying to learn the problem from scratch it would have a tutor. You could call it imagination, as it will share the same brain.

Pincho Paxton
Joined: 8th Dec 2002
Posted: 1st Apr 2010 01:33
I've worked on the graphics a bit, and got them ready for the new updates that I will be adding...



Plystire
Joined: 18th Feb 2003
Location: Staring into the digital ether
Posted: 1st Apr 2010 05:30
I love these "AI learning" demos. Watched the tank wander around for quite some time... up to Epoch 237 whereupon I figured he wasn't learning well enough and exited.

Some things I noticed about the program:
- Many times, the tank undergoes an epoch which seems to have affected only one thing: which direction he wanders in circles continuously. Sometimes the sudden change caused him to wander into a large pack of mines, causing his score to go up REALLY high, apparently encouraging this behavior. The score was blind luck and the behavior was not favorable in any way, but because of the sudden change, he earned a higher score for stupid behavior. This is bad.
- Favorable behaviors don't seem to last long for some reason. At Epoch 60, and much later at Epoch 212, the tank actually showed signs of intellect. It drove straight forward for a bit (driving straight does NOT happen often) and turned slightly when it "noticed" a mine nearby. Upon changing to the next Epoch, however, the behavior changed vastly and it went back to driving in circles.
- Very few times did the tank actually show any change in behavior related to its sensors; it wasn't often that the tank actually responded to nearby mines. Perhaps there should be some form of "positive reinforcement" for the AI's reaction to nearby mines: if it hits a mine after changing course in response to a sensor, it should receive an extra boost in score. I think this would encourage the AI to make use of its sensors and would show more positive improvements in the way it reacts to its surroundings (see the sketch below).
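
To make that idea concrete, here is a rough C++ sketch of the kind of bonus I mean (the names are invented for illustration; they are not from your DBP source):

// Hypothetical scoring sketch: give an extra reward when a mine pickup
// follows a course change that the mine sensor triggered.
struct Tank
{
    bool  turned_for_sensor = false; // set when the tank changes course because it sensed a mine
    float score             = 0.0f;
};

void on_mine_collected(Tank &tank)
{
    tank.score += 1.0f;              // normal reward for picking up a mine
    if (tank.turned_for_sensor)
    {
        tank.score += 0.5f;          // extra boost: the pickup followed a sensor-driven turn
        tank.turned_for_sensor = false;
    }
}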


Just my thoughts. But the "blind luck syndrome" seems to play a huge role here.

Looking forward to your next upload.


The one and only,


Pincho Paxton
Joined: 8th Dec 2002
Posted: 1st Apr 2010 11:30
I think it's because the learning is always switched on, so even when it becomes intelligent it goes back to learning again. I tested a faster epoch change, and the tank switched direction a lot more and seemed to go for the mines more. I need to switch off the learning.

Pincho Paxton
Joined: 8th Dec 2002
Posted: 2nd Apr 2010 00:41 Edited at: 2nd Apr 2010 00:41
New version available. Windowed, more intelligent, and no sync rate limit.

The buttons don't do anything yet.

First page. Same link as before.

AndrewT
Joined: 11th Feb 2007
Location: MI, USA
Posted: 2nd Apr 2010 00:49
For me the tank seems to be intentionally avoiding the mines--and it gets pretty damn good at it, too. It's learned to find its way through very tight and complex patterns without touching a single mine.

i like orange
Pincho Paxton
Joined: 8th Dec 2002
Posted: 2nd Apr 2010 01:16 Edited at: 2nd Apr 2010 13:44
Maybe that's because it also scores for getting close to them. But I watched a movie and left the program running: it got to 30000 epochs, and it was quite good at aiming for the mines about half of the time.

Plystire
Joined: 18th Feb 2003
Location: Staring into the digital ether
Posted: 2nd Apr 2010 22:59
Had it run for quite a while (Epoch 634k+), and here are the results:
The tank doesn't aim for the mines any better than it did around 1000 epochs.

Perhaps there's something wrong with the way it's learning, or is supposed to learn. I don't think you're allowing the neural network to train long enough in each epoch, so even if a good set of neurons is present, it gets scored VERY poorly because the tank doesn't even have TIME to move to a mine and pick it up. This is likely causing confusion in the training process.


I've attached a pic. Is the tank supposed to grow larger over time?


The one and only,


Pincho Paxton
Joined: 8th Dec 2002
Posted: 3rd Apr 2010 03:12 Edited at: 3rd Apr 2010 03:13
How can the tank grow larger? I have rewritten the program, and it's a bit better. I think my main problem is not understanding how the bit-shift left/right operators (<< and >>) and the XOR (||) work. I might just have to rewrite the whole thing using a different method.

Link102
Joined: 1st Dec 2004
Location: On your head, weeeeee!
Posted: 3rd Apr 2010 03:16
I ran the simulation up to 50000; it's still pretty random, though.

Do you have any documentation on this? I'd like to try this for myself.

Pincho Paxton
Joined: 8th Dec 2002
Posted: 3rd Apr 2010 03:36 Edited at: 3rd Apr 2010 03:45
I have links to C++ tutorials...

http://www.ai-junkie.com/ann/evolved/nnt1.html

http://www.adit.co.uk/html/programming_a_neural_netw

and VB

http://paraschopra.com/tutorials/nn/index.php

And the code that I adapted...

http://forum.thegamecreators.com/?m=forum_view&t=68559&b=6

Here's my source code. As you can see, I tried to add more neurons by shifting 4 bits instead of 2. Not sure if that is right. I need a better way to add more neurons as I don't understand bits.



Plystire
Joined: 18th Feb 2003
Location: Staring into the digital ether
Posted: 3rd Apr 2010 09:57
00110010

Bit shift left by 2 bits

11001000

All of the bits move in that direction.

Same applies for bit shifting to the right:

00110010

Bit shift right by 2 bits

00001100

Anything that would go off the edge is gone... forever.


The easiest way to understand XOR is to think about a kid in a candy store. He takes 2 pieces of candy to his parents and asks if he can have them. His parents say he can have one OR the other. It is understood in this case that they are specifying an XOR and not a simple OR. If it were a regular OR, the child could have BOTH pieces of candy, but with an XOR he can have one or the other, but NOT both.

Here's a truth table if you would prefer that instead:

0 XOR 0 = 0
0 XOR 1 = 1
1 XOR 0 = 1
1 XOR 1 = 0
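
If it helps, here is the same thing as a tiny C++ snippet (the values mirror the examples above; the syntax differs from DBP, but the idea is the same):

#include <cstdio>
#include <cstdint>

int main()
{
    uint8_t b = 0x32;                    // 00110010 = 50

    uint8_t shifted_left  = b << 2;      // 11001000 = 200: every bit moves left, zeros fill in
    uint8_t shifted_right = b >> 2;      // 00001100 = 12:  every bit moves right, the low bits are lost

    uint8_t xored = 0x3 ^ 0x5;           // 0011 XOR 0101 = 0110 = 6: a 1 wherever the bits differ

    printf("%d %d %d\n", shifted_left, shifted_right, xored);   // prints: 200 12 6
    return 0;
}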


Anyway, I don't know how the tank could grow larger, but as you can see in the pic, it did. It was not that large when I left, but it was when I got back.


The one and only,


Pincho Paxton
Joined: 8th Dec 2002
Posted: 3rd Apr 2010 12:09
Well that's how they work, but what do they do to the neural network?

Plystire
Joined: 18th Feb 2003
Location: Staring into the digital ether
Posted: 4th Apr 2010 01:18 Edited at: 4th Apr 2010 01:24
Well, looking at your source, I guess I can use this as an example:


The actual code piece I found in your AI_Init, and the functions WAYYY at the bottom (when I did a search for <<).

Okay, so the Curve value is given the result of "Set_Left_Nibble(3,rnd(14)+1)". So, let's run through the function and see what happens. We'll assume the random number returned is 10.

bt = 3 = 00000011
value = 10 = 00001010

First we bit shift the value to the left by 4 bits. This gives us:

value = 160 = 10100000

Now, we XOR bt and value together and return... this part I'm almost sure is wrong in your code, due to what the result is.

00000011
--XOR--
10100000
-----------
10100011

So bt ends up equaling 163.

However, this function was only supposed to set the LEFT nibble of the byte, not the entire byte. So by the end of this, our byte SHOULD have 0000 on the right. [EDIT] I take that back. The function does what it should do, which is set the left-hand side of the byte and leave the right alone.

So, now we take this value and toss it into the Set_Right_Nibble function and see how it goes. We'll assume the random number this time is 5.

bt = 163 = 10100011
value = 5 = 00000101

Firstly, this value is bit shifted left 4 bits and then right 4 bits. This essentially zeroes out the left 4 bits, since they'll fall off the left-hand side, and when the value is shifted back they come back in as 0's. So, value remains unchanged. Now we XOR bt and value together again and return the result.

10100011
--XOR--
00000101
----------
10100110

So, the final Curve value is 166. Now, if you were wanting a completely random byte, at least you got it.


How the Curve affects the neural net is another story that I won't get into. I'm just running through your code, explaining how the values would be affected. Hope that helps.

If you were stumped on a different section of your source and not this particular section, I can look into that as well.
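
For reference, here is my reconstruction of those two helpers in C++ (a sketch only; the function names mirror yours, but the C++ itself is mine, not your DBP source):

#include <cstdio>
#include <cstdint>

// Sketch of Set_Left_Nibble: shift value into the high nibble, then XOR it in.
// Because the high nibble of bt is zero in the example above, XOR behaves like OR here.
uint8_t set_left_nibble(uint8_t bt, uint8_t value)
{
    return bt ^ (uint8_t)(value << 4);
}

// Sketch of Set_Right_Nibble: shifting left then right by 4 clears value's high nibble,
// so only the low 4 bits get XORed into bt.
uint8_t set_right_nibble(uint8_t bt, uint8_t value)
{
    uint8_t low = (uint8_t)(value << 4);
    low >>= 4;
    return bt ^ low;
}

int main()
{
    uint8_t curve = set_left_nibble(3, 10);   // 00000011 ^ 10100000 = 10100011 = 163
    curve = set_right_nibble(curve, 5);       // 10100011 ^ 00000101 = 10100110 = 166
    printf("%d\n", curve);                    // prints 166
    return 0;
}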

[EDIT]
Someone correct me if I'm wrong, but I could have sworn that || was a standard OR operator, not an XOR operator.


The one and only,


Pincho Paxton
Joined: 8th Dec 2002
Posted: 4th Apr 2010 05:16
I think I shall have to try to rewrite the VB code in DBP instead. I am confused by the bit shifting. My old-school code doesn't even include types, so I am really confused by this code. I know VB, so I might be able to understand it better. I really want to have multiple neurons available to me without any confusion. After the tank program I want to work on people programs, so I can't have only a vague idea of what I am doing. I need things to be completely clear to me.

n008
Joined: 18th Apr 2007
Location: Chernarus
Posted: 4th Apr 2010 23:23
No offense, but this is a horrible test for a neural network. There's no complexity to the challenge at all. The brain gets basically no benefit from learning to direct itself straight towards a mine, as it can just move randomly and achieve the same result. The lack of borders or confinement contributes to this. Instead of having such an open, random environment, try to make it more pocketed, perhaps having a few specific algorithms for generating maze structures and minefields, and iterating through them to force the network to be adaptable to different situations. Right now there's no way to tell whether it is actually learning or not.

"I have faith, that I shall win the race, even though I have no legs, and am tied to a tree." ~Mark75
Dark Dragon
Joined: 22nd Jun 2007
Location: In the ring, Kickin' *donkeybutt*.
Posted: 5th Apr 2010 00:53
This is interesting. I may try something similar.

(\__/) HHAHAHAHAHAH!
(O.o ) / WORLD DOMINATION!!!!!!!!!!
(> < )
Pincho Paxton
Joined: 8th Dec 2002
Posted: 5th Apr 2010 18:04 Edited at: 5th Apr 2010 18:08
Quote: "No offense, but this is a horrible test for a neural network. There's no complexity to the challenge at all. The brain gets basically no benefit from learning to direct itself straight towards a mine, as it can just move randomly and achieve the same result. The lack of borders or confinement contributes to this. Instead of having so much of an open, random environment, try to make it more pocketed, perhaps Having a few specific algorithms for generating maze structures and minefields and iterating through them to force the Network to be adaptable to situations. Right now there's no way to tell if it is actually learning or not."


Random attempts are how Neural Networks work. You will be able to tell it is working when it no longer looks random: it will go for the mines. It benefits by getting a higher score for collecting mines. Currently, however, it isn't quite working right, so I'm rewriting the code from scratch.
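
To be clearer about what I mean by random attempts plus scoring, here is the general principle as a C++ sketch (this is not my actual code; it is a simple hill-climbing illustration, and all the names and numbers are made up):

#include <cstdio>
#include <cstdlib>
#include <vector>

const int NUM_WEIGHTS = 16;                   // arbitrary brain size for the sketch

// Keep a random variation of the brain's weights only if it scored higher
// over its epoch than the best attempt so far.
std::vector<float> best_weights(NUM_WEIGHTS, 0.0f);
float best_score = -1.0e9f;

std::vector<float> next_trial()
{
    std::vector<float> w = best_weights;
    for (float &v : w)                                   // small random nudge to every weight
        v += ((std::rand() % 2001) - 1000) / 10000.0f;   // roughly +/- 0.1
    return w;
}

void end_of_epoch(const std::vector<float> &trial, float epoch_score)
{
    if (epoch_score > best_score)             // the random attempt scored better...
    {
        best_score   = epoch_score;
        best_weights = trial;                 // ...so keep it as the new starting point
    }
}

int main()
{
    for (int epoch = 0; epoch < 5; ++epoch)
    {
        std::vector<float> trial = next_trial();
        float score = (float)(std::rand() % 100);   // stand-in for the mine-collecting score
        end_of_epoch(trial, score);
        printf("epoch %d: best score so far %.0f\n", epoch, best_score);
    }
    return 0;
}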

n008
Joined: 18th Apr 2007
Location: Chernarus
Posted: 6th Apr 2010 03:27
I understand how neural networks work, but what's the point of this if there is really nothing for the neural network to accomplish? All I mean is that you should make the field more interesting for the bot to move through, so it's more obvious and impressive when the network learns and progresses, rather than it just zooming through the screen endlessly.

"I have faith, that I shall win the race, even though I have no legs, and am tied to a tree." ~Mark75
Pincho Paxton
Joined: 8th Dec 2002
Posted: 6th Apr 2010 18:16
It's the first stage, just a test.

Plystire
Joined: 18th Feb 2003
Location: Staring into the digital ether
Posted: 9th Apr 2010 02:34 Edited at: 9th Apr 2010 02:35
The problem I'm foreseeing with Pincho's current executable is that the AI surfs through epochs far too quickly. The AI does not have time to "try out" new network configurations and properly judge what worked and what didn't. From what I saw, the tank didn't even have enough time to ROTATE towards a mine, let alone go and get one, before 10 epochs had passed. What's the point of reconfiguring the AI to something new (i.e., a new Epoch) if the old configuration wasn't properly tested?

The problem with the program is not that it isn't interesting. The AI does not require an elaborate layout in order to demonstrate its capabilities. The way it is right now would work fine for an AI demonstration and to prove learning capabilities. It simply needs work.

@n008:

The "point" of this (to my understanding) was not to have the AI do something constructive, but to simply prove that it's working and can learn. If this cannot be shown in a test such as this, then I have to ask, what's the point of moving on?


The one and only,


Pincho Paxton
Joined: 8th Dec 2002
Posted: 9th Apr 2010 20:19 Edited at: 9th Apr 2010 20:21
It was originally restricted to 10 seconds before a change happened, but after complaints that I had capped the changes, I sped it up. So you're saying that 10 seconds is better than the current situation? Anyway, I have tried 10 seconds, 5 seconds, 1 second, 500... etc.

Diggsey
Joined: 24th Apr 2006
Location: On this web page.
Posted: 9th Apr 2010 21:13 Edited at: 9th Apr 2010 21:14
If you want to see any real improvement in the neural net, you need to be trying hundreds of configurations each second. Each one should get at least the equivalent of the current 20 seconds of testing.

If you still want to have a good graphical output, you should save the results from one in every few thousand tries, and then play them back while others are still being tested in the background.
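
In outline, the loop I have in mind looks something like this (a C++ sketch; every name and number here is invented):

#include <cstdio>
#include <vector>

struct Config { std::vector<float> weights; };   // one candidate network configuration
struct Replay { Config config; float score; };   // kept so it can be shown on screen later

// Run one configuration headless with a fixed timestep and no rendering or sync
// waits, for the equivalent of about 20 seconds of game time, and return its score.
float evaluate(const Config &cfg)
{
    const float dt = 1.0f / 60.0f;
    float score = 0.0f;
    for (int step = 0; step < 20 * 60; ++step)
    {
        // step the tank physics and the neural net here, adding to score on pickups
        (void)cfg; (void)dt;
    }
    return score;
}

// Many evaluations per real second; keep one configuration in every few
// thousand so it can be played back while training continues in the background.
void train(long total_trials, std::vector<Replay> &replays)
{
    for (long trial = 0; trial < total_trials; ++trial)
    {
        Config cfg;                         // mutate the current best configuration here
        float score = evaluate(cfg);
        if (trial % 5000 == 0)
            replays.push_back({cfg, score});
    }
}

int main()
{
    std::vector<Replay> replays;
    train(1000000, replays);
    printf("%zu replays saved for playback\n", replays.size());
    return 0;
}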

At the moment, even after leaving it running for hours, there is no learning at all, and the movement is still completely random. Any apparent intelligence is down to probability: sooner or later, if you try random things, you are going to get a sequence of moves that looks intelligent. Then it disappears straight away.

Plystire
Joined: 18th Feb 2003
Location: Staring into the digital ether
Posted: 9th Apr 2010 22:46
@Pincho:

Yes, 10 seconds would be better than 1/10th of a second of training before each epoch. The network needs a chance to prove itself. I think the best amount of time would be roughly enough for the tank to drive from one corner of the screen to the opposite corner and back. That will ensure the tank has enough time to gather mines... or miss a lot of mines. Gathering versus missing is what determines whether or not the network has improved, and if it does not have enough time to demonstrate this, the outcome will, in turn, be essentially random.
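
As a rough number, the corner-to-corner-and-back time works out like this (a C++ sketch; the window size and tank speed are assumed values, not taken from the program):

#include <cmath>
#include <cstdio>

int main()
{
    // Rough epoch length: time to drive from one corner to the opposite corner and back.
    const float screen_w   = 1024.0f;   // assumed window width in pixels
    const float screen_h   = 768.0f;    // assumed window height in pixels
    const float tank_speed = 120.0f;    // assumed tank speed in pixels per second

    float diagonal   = std::sqrt(screen_w * screen_w + screen_h * screen_h);
    float epoch_time = 2.0f * diagonal / tank_speed;    // there and back

    printf("suggested epoch length: about %.0f seconds\n", epoch_time);   // about 21 seconds here
    return 0;
}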


@Diggsey:

That is true. The only way to speed up the process without reducing its effectiveness is to have it run in the background, on pure maths, at an uncapped framerate. But before you can do that, you need to make sure it can function at a capped framerate with you watching it.

I don't think saving and replaying results would be a good idea, though, since the premise here is that the newest versions are the better versions. Would it not make more sense to simply give the user a choice: either watch the network learn (capped framerate, watching the tank(s) wander around doing what they're supposed to do), or speed it up and run the training in the background as fast as it can, while displaying minimal output to keep the user updated on how well things are going (such as an average, or a "best score" for each epoch)?


Just my thoughts on the situation. I would really like to make an ANN plugin for DBP, but I'm having problems creating the DLLs in VC++ Express right now.


The one and only,


haliop
Posted: 19th Apr 2010 11:34
i find this project very, very interesting!
time now in israel: 11:34, memorial day.
i will let it keep going as long as i can.
btw, does it actually save its progress, or if i restart it, will it go back to 0?
