Work in Progress / Artificial Neural Net Engine (ANNE)

Author
Message
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 10th Aug 2007 10:02 Edited at: 10th Aug 2007 10:16
"I'm now going to give anne the capability to take over the world. Every second it is not taking over the world it loses points."

So, what are we going to do tonight Brain?

----

ANNE is crashing a lot (so no posting tonight), and I have no clue as to why. I have caught more bugs than in an Indiana Jones movie. It might just be too many neurons... too much memory... need... more... what was I talking about?

----

Edit: OK, I changed my mind. Here are the files (exe included). I lowered the number of neurons a bit, and it seems to work. No promises though. I have not had a successful long-term test, so please let me know what you get, if anything. Also, please post the .dna files when you are done - thanks.

A little description - ANNE can now "see" a little further away and can determine distances (within sight). ANNE's visual area is the light blue circle around the AI cube. ANNE also now has almost twice as many input neurons (8 directions as opposed to 4).

ANNE now resets herself every thousand Epochs and starts learning all over again. This gives the DNA a chance to evolve better. If ANNE has been running for a very long time (several thousand Epochs, or tens of thousands), the best observation of ANNE's performance should be around Epochs 800-1000.
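As a rough illustration of that reset cycle (a Python sketch with made-up names, not the actual DBPro source):

    import random

    EPOCHS_PER_EVOLUTION = 1000      # ANNE resets after every thousand Epochs

    def run_epoch(dna, rng):
        # Stand-in for one scored learning period; the real score comes from collisions.
        return rng.gauss(dna["quality"], 1.0)

    def evolve(generations=5, rng=random.Random(0)):
        best_dna, best_score = {"quality": 0.0}, float("-inf")
        for _ in range(generations):
            # Each reset starts learning over, seeded from the best DNA found so far.
            dna = {"quality": best_dna["quality"] + rng.gauss(0, 0.1)}
            score = sum(run_epoch(dna, rng) for _ in range(EPOCHS_PER_EVOLUTION))
            if score > best_score:
                best_dna, best_score = dna, score
        return best_dna

    print(evolve())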


Open MMORPG: It's your game!

Attachments

Teh_Face
18
Years of Service
User Offline
Joined: 1st Oct 2006
Location:
Posted: 10th Aug 2007 21:16
This is a very interesting project RiiDii. I've been watching this thread and running your AI since I found it the other day, lol.

Sorry for not posting sooner and including dna files, but I have also been a bit nosey, looking through your source and altering/adding stuff. Mainly adding extra functions (and an extra 2 sensors for top and bottom) the AI can use, like letting it go up and down in 3D space whilst being bombarded by 200 dummy cubes which are also able to move within the extra spatial dimension, rofl. Anyway, any dna files probably would have been useless.

Whilst looking through the source and messing with stuff (as you do, lol), I found your dynamic function engine, among other things, quite impressive as well. I wondered how you managed to give the neural net the ability to execute whichever functions it wanted.

Before your latest update with the visual area to determine distances, ANNE did seem to figure things out better and was able to avoid the cubes more often.
Whilst watching the AI with your latest, it seems to spin into 'crowds' of cubes quite often, sometimes spinning in circles around the cubes as well.

I'm probably a million miles out, but it looks like it might be too much for the neural net's size to properly make sense of the amount of information it's getting from its sensors. Admittedly I haven't looked at the source of your latest yet though, so I have no idea.

Anyways, I've run your latest ANNE for a few hours and saved the dna files at epoch 970 of each cycle for the first two cycles. The third cycle it crashed, so I've included that dna file as well and the debug.txt if it helps at all. (There are three dna files in total.)

Attachments

RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 11th Aug 2007 07:04
Thanks Teh_Face for all the comments.

Quote: "Mainly adding extra functions (and an extra 2 sensors for top and bottom) the ai can use, like letting it go up and down in 3d space whilst being bombarded by 200 dummy cubes which also able to move within the extra spatial dimension rofl, anyway so any dna files probably would of been useless."


How did it do?

Quote: "Whilst looking through the source and messing with stuff (as you do lol) your dynamic function engine among other things is quite impressive aswell, I wondered how you managed to give the neural net the ability to execute whichever functions it wanted."


Functions can be added manually to the script source file (the last file in the project). There is also a Criterion Coding System, which is a utility for creating DFE projects and then creating a script file for them. It makes adding hundreds of functions a snap. If you would like to play around with the CCS, I can send you a copy to install and test.

Quote: "I'm probably a million miles out, but it looks like it might be too much for the neural net's size to properly make sense of the amount of information it's getting from it's sensors."


While I don't think this is why it is crashing, I do agree with you, and I don't think the current neural net size can handle the wide range of inputs. Not only have 4 directions been added, but the input is now analog (distance based) instead of binary (collision or not).

Quote: "Anyways, I've run your latest anne for a few hours and saved the dna files at epoch 970 of each cycle for the first two cycles. The third cycle it crashed so I've included that dna file aswell and the debug.txt if it helps at all. (there's three dna files in total)"


Thanks!

@dabip: Forgot to shout a "Thanks" out to you for posting the dna file. So... "Thanks!"


Open MMORPG: It's your game!
dab
20
Years of Service
User Offline
Joined: 22nd Sep 2004
Location: Your Temp Folder!
Posted: 11th Aug 2007 21:11
That new demo is cool. The view thing seems a tad too big, but then again, it could be that size for better viewing. I donno, but nice job. I'm loving this project. Thanks for the thanks.
Teh_Face
18
Years of Service
User Offline
Joined: 1st Oct 2006
Location:
Posted: 12th Aug 2007 03:38 Edited at: 12th Aug 2007 03:43
Overall ANNE seems to handle 3 dimensions fairly well (to an extent), from what I could see at least, even though giving it an extra dimension increases the possibilities for movement exponentially. It is quite amusing to watch as well, if nothing else. Sometimes it'll do three-quarter circles forwards and backwards and do crazy-looking stuff; most times it just spins within short ranges on as many axes as it can. Other times it cruises around weaving in and out of the dummy cubes.

The scores the AI gets are mostly positive, but I think that's because there's a lot more space where there are no dummy cubes to collide with.

I've noticed that the more 'spinny' functions the AI can use, the more 'spinny' it will be, which seems fairly obvious I suppose. I haven't run this altered one for any extended period of time, so for all I know it could start getting really good in all three spatial dimensions.

I've only just recently adjusted the number of dummy cubes to 5 and made them point in the direction of the AI cube (it all too often figures out that sitting still gives it the best scores), so effectively they follow and chase it. The AI cube looks like it's having fun: it goes off the edge of the screen to wrap around to the other side and taunts the dummies until they get near, then zooms off to the other side and continues taunting them, rofl. Your AI is actually quite capable and dynamic.

I will stop messing with your source though, RiiDii, if you ask me to, as I can imagine you may not like people doing that. Also, I don't want to interfere with any progress, as you certainly have a better idea on this AI stuff than I do. I'll happily run your updates though for hours at a time and post any dna files if required.

EDIT: I've just thought that lowering the number of dummies could improve the performance of the AI cube using the distance sensors without having to increase the size of the neural net.
Code Dragon
18
Years of Service
User Offline
Joined: 21st Aug 2006
Location: Everywhere
Posted: 12th Aug 2007 04:48 Edited at: 12th Aug 2007 05:01
This sounds really interesting. I'll start following this thread. I remember writing a program like this (much simpler of course) in QBASIC. It would generate 100 random strings and score each one for how close it is to "Methinks it is like a weasel", then take the best one and mutate it into 100 more strings. It usually got the message pretty fast; for entire paragraphs it could take hours though.
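For anyone curious, a minimal Python version of that "weasel" idea (hypothetical, just to show the keep-the-best-of-100-mutants loop):

    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def score(s):
        return sum(a == b for a, b in zip(s, TARGET))

    def mutate(s, rate=0.05):
        return "".join(random.choice(CHARS) if random.random() < rate else c for c in s)

    parent = "".join(random.choice(CHARS) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        # Breed 100 mutated copies and keep the one closest to the target string.
        parent = max((mutate(parent) for _ in range(100)), key=score)
        generation += 1
    print("matched after", generation, "generations")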

Quote: "The second one I like because it figured out that by spinning, it could compensate for its limited sensing capabilities. Instead of only four directions, by spinning, it could sense in all directions."


Ahh! It's too smart. Next you're going to be saying:

"I'm going to give ANNE the ability to write games in DBPro. Everytime it makes a WIP post that goes up in flames it loses points."

I still believe we humans are the greater power. If you put a neuron under a microscope, it can't make something as abstract as a thought. 100 neurons, still no thinking. Why should 2 billion think? That's the question, which is why I believe that we don't really think with our brains.


Dragonware Games - Free Game Downloads
Dr Manette
18
Years of Service
User Offline
Joined: 17th Jan 2006
Location: BioFox Games hq
Posted: 12th Aug 2007 06:54
And code dragon whips out his theoretical thoughts in the middle of a WIP. Very cool, I'm enjoying watching Anne grow.

RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 12th Aug 2007 08:26 Edited at: 12th Aug 2007 08:30
Quote: "I will stop messing with your source though RiiDii if you ask me too as I can imagine you may not like people doing that, also I don't want to interfere with any progress as you certainly have a better idea on this AI stuff than I do."


By all means - mess with it. It is intended as a Neural Net Engine ~ a source library for folks to use for their own AI needs. The little boxes running around are simply a test to see how well (or not) the ANNEngine works.

The ideal use of ANNE will be to place it into a fully functional AI code and let the neural net call the functions, which can be either controllers for an individual AI, a group of AI (like an RTS), or for an entire game.

Quote: "Everytime it makes a WIP post that goes up in flames it loses points."


One of my ideas for ANNE is not that far off. I was toying around with the idea of writing code that could search the internet, using existing search engines, for given words or phrases like "how are you doing today?".
[Example: One possible response I found was "Well.............My husband is acting stupid for the past week and today I decided that I was not going to bother with it I have surgery this week and I think that's enough for me to worry so let him ruin his day if he wants to so I guess my day is GREAT" (Yahoo search results)]

Boring "How to do it" stuff, so only look if you are interested.



Quote: "And code dragon whips out his theoretical thoughts in the middle of a WIP"


Lol: I think until ANNE is proven several times over, this whole thread is still theoretical*.

*Well, not the thread itself. That is real. Well, as real as a thread can be. And I suppose the WIP is real too, even if ANNE doesn't work. But ANNE should still be classified as "theory."


Open MMORPG: It's your game!
Diggsey
18
Years of Service
User Offline
Joined: 24th Apr 2006
Location: On this web page.
Posted: 12th Aug 2007 12:08
@Code Dragon
How do you know that if you put together 100 neurons and feed them with everything the brain gets (including input/output from sensors/motors), they don't think? It would obviously be very stupid, but it could still think. Also, it would probably copy ANNE, and just sit there and do nothing, because there was no reason to do anything.

RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 13th Aug 2007 01:44
ANNE update. Was bored while watching ANNE learn, so I did some cheap graphics.


Open MMORPG: It's your game!

Attachments

Gamers for sale
19
Years of Service
User Offline
Joined: 19th Nov 2005
Location: Some where beneath the elements
Posted: 16th Aug 2007 02:45 Edited at: 16th Aug 2007 05:36
I am a little confused about how ANN works. How does it pick the best function? Like, for instance, direction? Is it based on past experiences (trial and error), knowledge about its surroundings, or both?

Also, what is a neural net? Is it considered multiple solutions to one problem, or one solution?

Another question is, do you determine the amount of error a neural network is producing and use it to adjust the weights of the neuron (charge), which I think you set as the score?

What is the conversion of amount of error to the amount of correction? Like, when changing the charge, what is the conversion of error units to amount of correction needed?

How does your network realize that collision is supposed to be avoided, like in your cube example? Do you have to set up the properties of the neural network, or will it come to the realization that collision should be avoided?

I know I am asking a lot of questions, but I am very interested in learning about this!

I am trying to make my own model of ANN by using back propagation, fuzzy logic, and genetic algorithms.

Let me know of any sources that you used to create this project!

Thanx!

GFS

Dark World Website Launched! http://www.darkworldengine.co.nr
Blog with updates of Dark World Launched! http://www.darkworldengine.blogspot.com
Benji
18
Years of Service
User Offline
Joined: 17th Dec 2005
Location: Mount Doom
Posted: 16th Aug 2007 07:16
ANNE doesn't work too well for me. The "smart" cube jerks and doesn't seem to learn too much. Also, when I first run the exe the cube just sits in the center and rotates randomly. I only ran it for 10 mins.

The vids look awesome though. Gj.

...
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 16th Aug 2007 17:16 Edited at: 16th Aug 2007 17:23
Quote: "How does it pick the best function."

At first, completely randomly. There is even a (nearly impossible) chance that no functions will be activated at all. As ANNE performs well, the neurons that performed well are kept. As ANNE performs poorly, the neurons that performed poorly are replaced. Eventually, only the best performing neurons are retained.

Quote: "what is a neural net."

Take a look back at the first post. There are some pics describing how each neuron works. Each neuron is randomly connected to 4 other neurons, creating a network of neurons. When a neuron has built up enough "charge" to "fire", the neuron sends a "charge" to the 4 connected neurons. These neurons store the "charge" until they are ready to "fire." The "charge" is simply data storage, and "firing" is done by passing that data along.
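A rough Python sketch of that charge-and-fire idea (illustrative only, not RiiDii's data layout; each neuron links to 4 random neighbours and passes its stored charge along once a threshold is crossed):

    import random

    class Neuron:
        def __init__(self, threshold=1.0):
            self.charge = 0.0          # the "charge" is just stored data
            self.threshold = threshold
            self.links = []            # 4 randomly chosen downstream neurons

    def build_net(size=100, links=4, rng=random.Random(1)):
        net = [Neuron() for _ in range(size)]
        for n in net:
            n.links = rng.sample(net, links)
        return net

    def step(net):
        for n in net:
            if n.charge >= n.threshold:        # "fire": pass the data along
                for target in n.links:
                    target.charge += n.charge / len(n.links)
                n.charge = 0.0                 # firing empties the neuron

    net = build_net()
    net[0].charge = 5.0                        # stand-in for an input neuron
    for _ in range(10):
        step(net)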

Quote: "Is it considered multiple solutions to one problem, or one solution?"

Not exactly sure what you mean. The Neural Net will solve as many problems as it is capable of. The size of the net is one of the limiting factors, but not entirely. The score really determines what ANNE will learn or not learn. In the demo, ANNE loses points for collisions, and gains points when not colliding with anything. Also, the more collisions that are occurring, the more points ANNE loses. If more factors were scored, ANNE would learn those factors as well. Of course, if ANNE has no way of "sensing" what is being scored, or no way of resolving what is being scored, then ANNE cannot be expected to "learn". For example, ANNE has no sense of the center of the playing field. If ANNE scored points for being within 10 units of the center, and ANNE lost points for being more than 20 units away from the center, it is likely ANNE would not learn anything since ANNE has no way to sense its position on the field.

Quote: "do you determine the amount of error a neural network is producing and use adjust the weights of the neuron (charge) which I think you set as the score?"

The score is determined independently of the neural net. The score is set completely by external factors. In this demo, the score is completely determined by collision. Weighting is used during the "training" of each neuron to help determine whether ANNE's performance was significantly better than previous performance or not. The weighting is currently a "ball-park" estimate and is not using statistical calculations, but it is close enough to do the job. I need to dig out my statistics books and refresh my memory.

Quote: "What is the conversion of amount of error to the amount of correction? Like, when changing the charge, what is the conversion of error units to amount of correction needed."

This is a BIG question, and I am not sure I completely understand it, so bear with me. First, there are two parts of the neural net that this could apply to: the neural net itself, or the instructors. Instructor neurons attach themselves to random neurons for a given number of Epochs. During this time, the neuron is put through two tests. The first test establishes a baseline performance for the neuron. The second test changes the neuron and records the score for the same period of time. If the first score equals or exceeds the second score (by a given margin for error reduction), the neuron is restored to its original state. Otherwise, the neuron remains changed. These changes can be drastic or subtle.

Drastic is accomplished by altering the "DNA". The DNA is the whole range of possible settings for each neuron as determined by a random seed from 0 to 10000. As each DNA seed performs, its score is recorded. As time passes, only the best performing DNA is selected (random DNA will still be selected, but only rarely as time goes on).

Subtle is accomplished by tweaking each of the neuron's settings. Subtle changes do not retain any long-term scoring the way DNA does.
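A hedged Python sketch of that two-test instructor idea (the score function, margin, and the "subtle" tweak here are placeholders, not ANNE's real values):

    import copy, random

    def instruct(neuron, measure_score, margin=0.05, rng=random.Random()):
        # Test 1: baseline performance of the neuron as it currently is.
        baseline = measure_score(neuron)
        backup = copy.deepcopy(neuron)
        # Test 2: apply a subtle change and score again over the same period.
        neuron["weight"] += rng.gauss(0, 0.1)
        changed = measure_score(neuron)
        if baseline + margin >= changed:       # no clear improvement: restore the original
            neuron.clear()
            neuron.update(backup)
        return neuron

    # Toy usage: the "score" is just how close the weight is to 1.0.
    neuron = {"weight": 0.2}
    for _ in range(200):
        instruct(neuron, lambda n: -abs(n["weight"] - 1.0))
    print(neuron)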

As far as Error Reduction within the neuron goes, this is accomplished using a sigmoidal function, which basically converts the input value into a nearly-binary output value. Median values are rare, while the extreme (high/low) values are more common. The sigmoidal function is used when a neuron is firing and is passing a charge on to another neuron.
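For readers who have not met the term, a sigmoidal filter in Python might look like this (a generic logistic curve with a "curve" steepness value, not necessarily the exact formula ANNE uses):

    import math

    def sigmoid(x, curve=4.0, centre=0.0):
        # A larger curve value makes the transition sharper, so most inputs map to
        # nearly 0 or nearly 1 (median outputs become rare), i.e. nearly binary.
        return 1.0 / (1.0 + math.exp(-curve * (x - centre)))

    for x in (-3, -1, 0, 1, 3):
        print(x, round(sigmoid(x), 3))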

I will try to find one of my better resources of study. It was an e-book, so I have to search around for it again.

Quote: "The "smart" cube jerks And doesn't seem to learn too much. Also when I first run the exe the cube just sits in the center and rotates randomly. I only ran it for 10 mins."

ANNE needs to run for hours to become "smart."


Open MMORPG: It's your game!
Advancement Games
19
Years of Service
User Offline
Joined: 6th Jan 2005
Location:
Posted: 16th Aug 2007 23:18
This is a great AI demonstration. I wanted to do something similar, except making the bot find its way through a maze and then learn the best route. However, I wanted to make the maze dynamic so it changes from time to time to make the AI adapt. I think I'll toy around with the neural network idea; ANNE has inspired me. What is the next step in this ANNE project?
Gamers for sale
19
Years of Service
User Offline
Joined: 19th Nov 2005
Location: Some where beneath the elements
Posted: 17th Aug 2007 05:55 Edited at: 17th Aug 2007 06:24
Quote: "What is the conversion of amount of error to the amount of correction? Like, when changing the charge, what is the conversion of error units to amount of correction needed."


What I mean is, how do you handle the amount of error (I think in ANNE it is the score) when adjusting the weight applied to the charge?

Let's say you have 0.3 error (on a 0-1 scale); how would you handle the amount of error to make it better? Is there an equation?

Quote: "Is it considered multiple solutions to one problem, or one solution?"


This is more of a two-part question.

What I mean is, can you use one neural network or many at once? The other part is, if you have one neural network, then how can you handle multiple problems? Are there only a fraction of the neurons working on one problem when there is another fraction working on another?

Is the size of the collection of neurons working on a problem random, or constant?

I didn't understand the whole firing concept. Is it when the data is passed on to another neuron?

Is the charge on the 0 to 1 scale, or is it converted to the 0 to 1 scale after receiving input?


Let me see if I am getting this right:

At the beginning of the program the neurons have an input of random numbers.

These numbers are changed depending on how well or not so well they are doing. (Error amount)

The neurons' unsupervised learning randomly tries to correct the error, while supervised learning (INeurons) corrects it the right way. (Closer to the desired result)

The process of supervised learning could be set to different levels by setting them randomly. The INeurons grab 100 random neurons to be checked for training, and the ones that are performing poorly are trained. Then once done, it moves to the next 100. The trained neurons are considered the next generation.

These levels are the max of the Epoch cycles it can reach.


I have a few questions about, basically, where the information is coming from. First off, where do you get your score? Is it by the amount of collision, or is there a way that the neural network can figure it out? Where does the input come from? What does it output?

Is there a way to make the neural network as perfect as possible? (Error free) Would you have to adjust something? I was looking for the computer to be able to solve the problem as quickly as possible.

Thanx Again!

GFS

Dark World Website Launched! http://www.darkworldengine.co.nr
Blog with updates of Dark World Launched! http://www.darkworldengine.blogspot.com
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 17th Aug 2007 16:05
Quote: "What is the next step in this ANNE project?"

Two phases, probably at once, so no updates for a little while.
Phase 1: Update ANNE for more efficient memory use using memblocks.
Phase 2: Update ANNE to allow multiple Neural Nets to operate at once.
I am particularly excited about this second phase because it will allow a new level of "intelligence" to be added to a single entity. Take this simple demo as an example. Instead of all of the bot's functions (input, forward movement, & backward movement) being handled by a single neuron cluster, multiple neuron clusters can be created. Different functionalities can be handled by smaller, more efficient clusters. Or, multiple AI entities could be created for the same program (right now, only one ANNE per program allowed).

Quote: "What I mean is, how do you handle the amount of error (I think in ANNE it is the score) when adjusting the weight applied to the charge?"

The error correction is one of the "genes" within each neuron's "dna". When a charge comes into a neuron, it is filtered through the function below. The Curve value adjusts the number of standard deviations in the distribution model that is the result of the calculation in the function. The Curve value is also one of the "genes" within each neuron that is monitored and modified by the INeurons. End result; Error Correction is a "learned" attribute. Each situation in which the ANNEngine could be placed is different, so the Error Correction would be different as well.



Quote: "What I mean is, is can you use one neural network or many at once."

Currently, only one at a time. When I rewrite ANNE to use memblocks, each memblock will be a single neural net.

Quote: "The other part is, if you have one neural network, then how can you handle multiple problems? Are there only a fraction of the neurons working on one problem when there is another fraction working on another?"

At any given moment, there is no way to tell. The entire neural net works together. Let's say an input neuron passes a charge through a chain of neurons. One of those neurons in the chain is neuron #403. Not only could another chain of neurons pass through #403, but the same chain could loop back around and pass right back through #403.

When neurons link to each other, there are really no restrictions. Initially, the only requirement is that a newly created neuron can only link to existing neurons. So neuron #403 cannot initially link to neuron #557. However, when being "trained", the neuron tries out different links, so at some point, neuron #403 can link to neuron #557.

This can create loops. There is nothing preventing #403 from linking to #557 and #557 from linking to #403 (larger or more complex looping is possible). This can provide a continuous charge within the loop. However, the charge is probably not stable and will either grow to its maximum values, or it will dwindle and fade away.

There are three factors that should keep this out-of-control charge growth from occurring too often. The first is a Resistance gene within each neuron. When a charge is coming into a neuron, it is divided by the resistance value (which has a minimum setting of 1.0). The second factor is a Dissipation gene that lowers the neuron's stored charge each frame. And the third factor is the sigmoidal calculation. Once an incoming resisted charge falls below the sigmoidal curve's center point, the incoming charge is reduced to nearly 0.
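A small Python sketch of how those three damping factors might combine on an incoming charge (the gene values and the exact formula here are invented, just to show the shape of the idea):

    import math

    def receive_charge(stored, incoming, resistance=1.5, dissipation=0.1,
                       curve=4.0, centre=0.5):
        incoming /= max(resistance, 1.0)       # factor 1: the Resistance gene (minimum 1.0)
        # factor 3: sigmoidal squash - below the centre point the charge drops to nearly 0
        incoming *= 1.0 / (1.0 + math.exp(-curve * (incoming - centre)))
        stored += incoming
        stored -= stored * dissipation         # factor 2: Dissipation bleeds charge off each frame
        return stored

    charge = 0.0
    for frame in range(20):
        charge = receive_charge(charge, incoming=2.0)
    print(round(charge, 3))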

Quote: "Is there a way to make the neural network as perfect as possible? (Error free) Would you have to adjust something?"

Probably not. Learning is a trial and "error" process. ANNE could be far more efficient and learn considerably faster. But for now, computers are too limited.


Open MMORPG: It's your game!
Lourg
20
Years of Service
User Offline
Joined: 20th Dec 2003
Location:
Posted: 23rd Aug 2007 23:07
It seems my first run of ANNE only wanted to move in one direction. I'm going to run ANNE all night and see what happens. Just wondering, what is the longest that you have had ANNE going? I will post my results.
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 24th Aug 2007 17:12
A little more than 24 hours. But the best results are probably within 4 - 8 hours.


Open MMORPG: It's your game!
dab
20
Years of Service
User Offline
Joined: 22nd Sep 2004
Location: Your Temp Folder!
Posted: 24th Aug 2007 21:29
Can't wait to see more ANNEs running away from each other. Or even better yet, racing on a race track. That'd be kind of cool. Have each one learn the fastest route to completing the race. Even make some aggressive so they start bumping each other's cars.
Diggsey
18
Years of Service
User Offline
Joined: 24th Apr 2006
Location: On this web page.
Posted: 26th Aug 2007 11:21
It would be cool if they learnt to make machine guns, and blow each other's cars up...

MadrMan
18
Years of Service
User Offline
Joined: 17th Dec 2005
Location: 0x5AB63C
Posted: 20th Oct 2007 17:29
This died?

I changed this ANNE a bit about a month ago; it uses Newton now.
Also, I made some changes to speed it up.
Keys -
Hold space to keep the screen updated; if you don't, it warps along at 800 fps.
Hold Shift + Mouse to move the camera.
Have fun!

And did Riidii make any progress on this?

Attachments

Aralox
17
Years of Service
User Offline
Joined: 16th Jan 2007
Location: Melbourne
Posted: 21st Oct 2007 09:00
Whoa, I want to see what becomes of this!
*ticks the mail notify box*

RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 6th Jan 2008 17:34
Sorry I have not worked on this. ANNE isn't "dead," but more like in hibernation. Between work and the Open MMORPG project (and now Battle for Middle Earth), I am finding very little time for side projects. Right now, I am working on a Binding Code for Entities for our RPG game. Binding allows one entity to be Bound to another entity, which will allow for combining, listing, or conditional checks on the results of multiple entities. So, for example, Strength (as an entity) may have an attack value of 5, an attack skill might have an attack value of 3, and a sword might have an attack value of 2. The binding system will allow these values to be totaled up to determine the final attack value (10 in the example). Or the maximum value could be returned (5 in the example).

I see Binding as a valuable addition to ANNE as well. ANNE had a separate binding array, which limited its flexibility. The new binding system takes advantage of the flexible Infinite State Engine to assign bound Entities. This means that a Neuron within the Neural Network could connect to any number of other neurons within the network, instead of a given number (I think four was the number set previously in ANNE).

MadrMan, I am checking out your version right now.


Open MMORPG: It's your game!
Darth Kiwi
19
Years of Service
User Offline
Joined: 7th Jan 2005
Location: On the brink of insanity.
Posted: 7th Jan 2008 20:04
Thanks for the update RiiDii - just found this thread, and I'm very impressed.

Purely out of interest, if you loaded every word in the dictionary into an array (or something) and then had the user type certain questions into an input box, and had ANNE generate a response of (initially random) words, and the user then scored the result for how much sense it made, could you (after a long long long time) generate a computer that talked to you?

I'm not actually a Kiwi, I just randomly thought it up one day.
n008
17
Years of Service
User Offline
Joined: 18th Apr 2007
Location: Chernarus
Posted: 8th Jan 2008 02:23
^ Ooh, Interesting concept!

I might invest some time into that!

If you did that, with the dictionary, that's what, hundreds of thousands of words? Then you divide up the words into groups such as verb, noun, adjective, adverb, etc. Then generate the sentence. The pattern of the sentence is recorded. Depending on how high or how low the user rates the understandability, the computer could ignore the pattern, or change the words.

RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 8th Jan 2008 17:00
Quote: "Purely out of interest, if you loaded every word in the dictionary into an array (or something) and then had the user type certain questions into an input box, and had ANNE generate a response of (initially random) words, and the user then scored the result for how much sense it made, could you (after a long long long time) generate a computer that talked to you?"


In theory, yes. An ANNEngine would even appear to understand the definitions; as it was taught, it would learn how to appropriately respond to definitions of words. This would eventually give an ANNEngine the basis for responding to new words/phrases faster, giving a very intelligent feel to ANNE.

The problem: ANNE takes up a lot of memory. Our brains are still thousands of times more powerful than even the most sophisticated computer. My first attempt at an ANNEngine was ANNE-TM (pronounced "Auntie M"; the TM standing for "Through Memblocks"). ANNE-TM used nibbles, instead of 4-byte floats or integers, to store neural charges and other data. This meant that most data was stored in roughly 1/8th of the memory used by ANNE. Even with that much compression, we are still a long way off from being able to mimic the learning and memory capabilities of a human mind.
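A hedged Python illustration of the nibble idea (not the actual ANNE-TM memblock code): two 4-bit values packed into each byte, roughly 1/8th of the space a 4-byte float or integer would take per value.

    charges = [3, 15, 7, 0, 9, 12]     # values limited to 0..15, one nibble each

    # Pack two nibbles per byte.
    packed = bytearray()
    for i in range(0, len(charges), 2):
        lo = charges[i] & 0x0F
        hi = (charges[i + 1] & 0x0F) if i + 1 < len(charges) else 0
        packed.append(lo | (hi << 4))

    # Unpack them again.
    unpacked = []
    for byte in packed:
        unpacked.append(byte & 0x0F)
        unpacked.append(byte >> 4)

    print(list(packed), unpacked[:len(charges)])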

If you think about it, ANNE (or likely any neural net) is really something akin to billions of monkeys pseudo-randomly pounding away on typewriters and eventually ending up with a rather impressive complete works of Shakespeare. Thinking of it backwards, Artificial Neural Nets are modeled after living Neural Nets, which is pretty bothersome if you boil it down to the basics (excluding religious interpretations of the human entity); we don't really "think," we just learn how to put things together in patterns that "survive" and carry on their genes; eventually evolving into things that have the appearance of "thinking."


Open MMORPG: It's your game!
Bizar Guy
19
Years of Service
User Offline
Joined: 20th Apr 2005
Location: Bostonland
Posted: 8th Jan 2008 17:40
You know, it would be a fascinating project to put ANNE on a computer network with the memory needed for it to be the equivalent of the human brain or greater, and see how smart it could become. After all, you don't just have to use the resources of a single computer. It would be amazing if ANNE could learn to ask questions, and even develop a sense of self. And this seems to be the best way to develop an actual intelligence.

Out of curiosity, you could probably make an ANNE that could learn to code. All it would take is trial and error. Given enough time if you did this, perhaps you could make an ANNE that could learn to modify its own code and test it. You know, the idea that one day computers will be smart enough to make themselves.


Superman wears Chuck Norris PJ's
n008
17
Years of Service
User Offline
Joined: 18th Apr 2007
Location: Chernarus
Posted: 8th Jan 2008 18:32
@RiiDii:

Please leave religious definitions out of this.

Thought is the response to stimuli. Neurons do think.

What you are looking for is complex patterns that match humans -- not easily achieved through a computer, because virtual neurons do not receive all of the same stimuli that humans do.

Darth Kiwi
19
Years of Service
User Offline
Joined: 7th Jan 2005
Location: On the brink of insanity.
Posted: 8th Jan 2008 23:42
Quote: "You know, it would be a fascinating project to put ANNE on a computer network with the memory needed for it to be the equivalent of the human brain or greater, and see how smart it could become."


"L-l-look at you, hacker. A p- A pathetic creature of meat and bone. Panting and sweating as you run through my corridors. How can you challenge a perfect, immortal machiiiiiiiine?"

It is a pity ANNE probably can't get much more powerful than your previous cube-space-finding example due to technological limitations. Thinking totally crackpot for a moment, maybe scientists could code biological computers to effectively do what ANNE does? Maybe they could build a working brain at some (far off) point in the future.

I know it's wacky. It's late. I need sleeeeep

I'm not actually a Kiwi, I just randomly thought it up one day.
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 9th Jan 2008 06:00
Quote: "@RiiDii:

Please leave religious definitions out of this."

I did. I specifically excluded it. Specifically so hopefully no one would include it.

The uses for ANNE are plenty and mostly fanciful. My primary goal is to allow for a new evolution of AI opponents in games. I'm kind of tired of figuring out how the pre-programmed AIs will behave and simply overcoming that AI with a cheesy solution. Battle for Middle Earth, for example: like so many RTSs, the solution to beating the potential build-up of hordes of enemies is to leave your castle's gates wide open (except when defending Helm's Deep; that's a battle of attrition). The AI thinks it has a way in, so it tries to send forces through immediately. Gates are nice narrow openings, so you can position enough archer forces around and whittle down the enemy as they filter through. And the AI will do this over and over and over again.

Should you close the gates, the AI will build up outside and attempt to find a way in. The enemy's way in is not always where you want it, and not always conveniently filtered right in front of your archers. So finding another way in is clearly the best way to attack the castle, even when the gates are wide open. But most AI today won't ever figure that out. A program like ANNE could figure that out. Or, at the very least, an ANNE program would be a little more creative and might surprise the player on occasion.


Open MMORPG: It's your game!
Darth Kiwi
19
Years of Service
User Offline
Joined: 7th Jan 2005
Location: On the brink of insanity.
Posted: 9th Jan 2008 10:24
That does sound good. You could run ANNE a few times as a developer until it reaches a semi-intelligent state where it has a basic grasp of RTS tactics. Then you could save that ANNE. Then you could run it further and further, after playing against tougher and tougher human opponents. Eventually you'd have a super-intelligent ANNE. So you've now got EASY and HARD AIs for your game. Then you can just load in the saved EASY or HARD AIs at the start of a match, and have ANNE learn from those pre-requisites.

I'm not actually a Kiwi, I just randomly thought it up one day.
MadrMan
18
Years of Service
User Offline
Joined: 17th Dec 2005
Location: 0x5AB63C
Posted: 9th Jan 2008 19:43
Good to see you're still alive Riidii

Did you try my version of ANNE? (or did anyone else?)
I'm not sure if it gets any more intelligent if you leave it on for a while. It seems to just be quiet when it's not touching anything, and when it does, it just goes and tries to find a way away from the cube. But it does that at epoch 0 as much as at epoch 1000. It's strange.
I have noticed, however, that sometimes a lot of the actions seem to contain the 'jump' action, so it just keeps flying up all the time, which is quite smart in a way...

Anyway, good to see you've not abandoned the project, but have just put it on a (long) 'pause'.
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 12th Jan 2008 07:52
Quote: "Good to see you're still alive Riidii"

Me too!

Quote: "Did you try my version of ANNE?"

Yes I did. I get pretty much the same results. Because there isn't much for ANNE to run into, it is hard to get a good score to represent how well ANNE is doing. In general, ANNE's training should be set up so that ANNE can only get a good score about 20% of the time (or less). Any more than that, and ANNE will get too many false-positive results. That is to say, ANNE will get a good score for a bad neural set-up. Inversely, ANNE's training shouldn't be too difficult or impossible to "win." For example, if ANNE is set up with a scenario where there is only one right answer out of thousands of possible options, it is really unlikely that ANNE will stumble across that solution. And if ANNE does, there may be too many other factors preventing ANNE from retaining that neural structure for very long.

I am going to set up a simple maze for ANNE in which ANNE will receive points for getting as close to the exit as possible in as short a time as possible. The maze will allow for multiple routes to the exit (instead of the typical single route). ANNE should be able to construct a neural network that allows navigation through the maze and end up close or at the exit.


Open MMORPG: It's your game!
Darth Kiwi
19
Years of Service
User Offline
Joined: 7th Jan 2005
Location: On the brink of insanity.
Posted: 12th Jan 2008 13:08
The maze idea sounds great. Sort of like the old Rat-In-A-Maze experiment.

I'm not actually a Kiwi, I just randomly thought it up one day.
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 16th Jan 2008 09:42 Edited at: 20th Jan 2008 22:04
Here's the "maze". I didn't put a lot of work into the maze, but I did give ANNE some new challenges. There is no collision. Instead, think of the maze as one that the rat can cheat at and climb over the walls. This still is "unpleasant" to ANNE and costs points, so avoiding walls is better. ANNE loses points for being on the Left or Bottom, and gains points for being on the Right or Top. The most points are gained at the Top-Right of the "maze." I also added a "compass" to ANNE, so ANNE knows direction. ANNE still does not have a sense of location.

I have also updated ANNE to use Ian M's text dll commands (Matrix1Util_16.dll). This prevents ANNE from crashing from the text manipulation that was being done manually. An "Evolution" counter has been added. Each Evolution is (set at) 1000 Epochs. Each Epoch is a given number of frames in which ANNE has to learn. Each Epoch, ANNE starts over from the lower left corner of the maze again. Each Evolution, ANNE is reset. The "DNA" array is saved and loaded. This helps ANNE select the best high-level settings based on ANNE's past performances. Like parents passing on DNA to children. After several Evolutions, ANNE will start with the best Neuron DNA, but not necessarily with the best settings, or in the best positions. ANNE will learn quicker though.

All in all, I was impressed with the results. I let ANNE run for about 24 hours. At about 20 Evolutions, I watched ANNE from the start of a new Evolution. By Epoch 100, ANNE was able to regularly achieve high scores. What I didn't expect was to see a decent combination of a sense of direction and object avoidance. I figured the points for positioning on the board would override collision too much. While ANNE clearly valued positioning over collision avoidance, it was clear that both were factors in ANNE's decision making.

What else demonstrated "intelligence" was that the "compass" I added to ANNE was a simple Object Angle Y() value plugged into 8 neurons. If just the Angle caused a difference in ANNE's behavior, ANNE would always point towards 0 or 360 degrees, and not likely anywhere in between. But it was clear that ANNE developed a preference for about 40 degrees, pointing to the Upper Right of the maze.

Another funny thing was that, often, ANNE would travel around the maze a bit, but when ANNE reached the Upper Right corner, ANNE had a tendency to stay put. Somehow, ANNE had learned to stay pointed at about 40 degrees when this position was reached. Not entirely sure how though, since ANNE does not receive x/y coordinates as input. The best thing I can figure out is that ANNE learned to "recognize" the Upper Right corner by the maze walls, and stay put for the best score. Much better AI than expected.

Anyway, attached are all the files. You need Ian M's text dll commands (Matrix1Util_16.dll) if you want to play with the code. I have also included the DNA file from the above mentioned run. Feel free to delete it and start your own ANNE over.

If anyone wants to add collision and/or add different maze walls, feel free. I would like to see how ANNE does with that as well.


Open MMORPG: It's your game!

Attachments

RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 20th Jan 2008 22:03 Edited at: 20th Jan 2008 22:05
Found a few bugs, including a fairly major one that prevented ANNE from learning as well as possible. Here is the update. ANNE's neural net has been increased to 10,000+ neurons. This makes ANNE a bit slow though. ANNE now has 2000 instructor neurons to train those 10k neurons, as well as 2 additional links per neuron (6 instead of only 4).

I am now working on a variation of the maze for ANNE to go through; instead of avoiding the objects, ANNE must seek them out for points. After this test, I might make an ANNE that can follow a moving target. Then maybe a Tank ANNE that gets points for seek-and-destroy.

The attachment for the updated ANNE is in the previous post.


Open MMORPG: It's your game!
Bizar Guy
19
Years of Service
User Offline
Joined: 20th Apr 2005
Location: Bostonland
Posted: 20th Jan 2008 22:12
Sounds amazing. Downloading.


Superman wears Chuck Norris PJ's
danielp
User Banned
Posted: 1st Feb 2008 23:37
Hi RiiDii,

I just found this thread and have been running your original ANNE EXE; it was averaging about 3K @ Epoch 100-120.

I was wondering if you still wanted testers, I would be glad to run it for you and post some results (I don't have DBPro, so it would need to be an EXE).

Please don't forget about this amazing project

danielp

danielp
Email - thegamecreators@danielp.e4ward.com
My Specs - 2047MB RAM | P4 3.4GHz | XP 5.1.2600 SP2 | GeForce 6800 256MB | Dell 230310 1600x1200 34x27cm
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 2nd Feb 2008 20:30
Thanks Bizar Guy & danielp.

I have not forgotten about it. I am trying a much larger net, so my testing is going much slower. I might reduce the size back down if there is no significant improvement. The current test is ANNE running around and collecting cups of coffee. I decided on coffee because ANNE sometimes shakes - it looks like the jitters. Anyway, it still seems to be giving random results, so more testing before the next upload. And I will post the executable file as well.


Open MMORPG: It's your game!
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 2nd Mar 2008 17:11
I am working on an ANNE V3 now, which will make significant improvements on memory usage. After mulling this over, and more research, it occurred to me that each neuron in ANNE contains its own unique equivalent to a chemical filter. That is, the structure for the filter was stored with each neuron.

An alternate way of accomplishing the same task is to develop a set of filters (similar to chemicals in a brain) that the neurons pass the data through as the data travels from neuron to neuron. So, instead of each of 10,000+ neurons storing their own unique filters, there would be, say, 10 filters for the entire neural net regardless of the net's size.

Each filter contains 5 data-points, each 4 bytes in size. This is 20 bytes per filter. With 10k neurons storing individual filters, this resulted in 200KB of memory required just for the filters. However, by having 10 "chemical" filters, the memory use is only 200 bytes; and the savings grows with the size of the net.
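The arithmetic as a quick Python check (the one-byte-per-neuron filter index is an assumption of this sketch, not ANNE's actual memory map):

    NEURONS = 10_000
    FILTER_BYTES = 5 * 4                        # 5 data-points x 4 bytes = 20 bytes per filter

    per_neuron_filters = NEURONS * FILTER_BYTES # each neuron stores its own filter: 200,000 bytes
    shared_filters = 10 * FILTER_BYTES          # 10 shared "chemical" filters: 200 bytes
    filter_index = NEURONS * 1                  # assumed: 1 byte per neuron to pick a filter

    print(per_neuron_filters, shared_filters, shared_filters + filter_index)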

Finally, by isolating the filters, ANNE's learning functionality can be split into two factors:

1) The neurons can focus on which chemicals to bond to instead of the might-as-well-be-infinite possible combinations of filters in ANNE V2. This will speed up the learning process considerably.

2) The chemicals can be evolved separately from the neuron learning process. Previously, each neuron was trying to learn and evolve at the same time. This resulted in evolved neurons needing to re-learn after evolving. For V3, the evolution of a filter will not be extreme enough for a neuron to have to re-learn which filter to bond to.

I have also found a few other ways to optimize performance, so ANNE V3 will be more sophisticated as well as faster.


Open MMORPG: It's your game!
danielp
User Banned
Posted: 2nd Mar 2008 21:20
Let me know when you need something tested out

danielp

danielp
Email - thegamecreators@danielp.e4ward.com
My Specs - 2047MB RAM | P4 3.4GHz | XP 5.1.2600 SP2 | GeForce 6800 256MB | Dell 230310 1600x1200 34x27cm
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 13th Apr 2008 18:11
Bumping. I am starting on ANNE V3 today after fixing some of the background library codes.

Some updates will include:

Using memory locations to store data instead of arrays;
Arrays will still be used, but a lot more of the data storage will be in memory, which should speed things up and allow for a larger net.
Inactive neurons;
Neurons that are not active will not process the same functions as active neurons, which will also save on resources.
"Chemical" based data transmission;
Instead of each neuron storing its own filters for receiving data, data will now pass through a "chemical" function, which will significantly decrease data storage and improve learning capabilities by altering fewer chemical patterns.
Neuron Clusters;
Neuron clusters will allow for smaller neuron groups to achieve a specified result. One lesson learned from previous ANNEs is that if the NNet size is too large, the learning process slows down or stops. By clustering, each cluster can learn fairly quickly. Structuring the clusters will allow the clusters to coordinate the learning, allowing for development stages where the AI learns to use parts before moving on and learning to coordinate the parts.

An example of using neuron clusters:
Input clusters (commonly eyes): Can be initially developed to receive external input and pass data out from the cluster. This cluster starts with a learning program that is scored positively if input is received, data is sent out, and the data level is appropriate relative to other sensory clusters.

Output clusters (legs, wheels, etc.): Receive data transmissions from other clusters, which should result in movement. The cluster is scored if the movement is in the appropriate direction and at an appropriate speed relative to other clusters (with some min/max thresholds).

Processing clusters (central nervous system): This is the last cluster to learn. Once the AI "knows" how to see and move, those clusters are "locked down" and the processing clusters go to work on achieving ANNE's goals as before.

Clusters can also be arranged into separate NNets, allowing for multiple AI's. It would be possible for the output of one AI NNet to interact with another AI NNet, resulting in "communication". This would require a special output cluster, but is something I will look into.
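A very rough Python sketch of that staged, lock-down training order (the names, the placeholder train() step, and the scoring are all hypothetical; this only shows the sequencing):

    def train(cluster):
        # Placeholder: in ANNE this would be the instructor/DNA learning loop for the cluster.
        cluster["trained"] = True

    clusters = {
        "eyes":   {"kind": "input",      "trained": False, "locked": False},
        "wheels": {"kind": "output",     "trained": False, "locked": False},
        "brain":  {"kind": "processing", "trained": False, "locked": False},
    }

    # Development stages: learn to sense, then to move, then to coordinate.
    for stage in ("input", "output", "processing"):
        for cluster in clusters.values():
            if cluster["kind"] == stage and not cluster["locked"]:
                train(cluster)
                cluster["locked"] = True       # lock the cluster before the next stage starts

    print(clusters)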


Open MMORPG: It's your game!
danielp
User Banned
Posted: 14th Apr 2008 10:24
Thanks for keeping us updated.

danielp

danielp
Email - thegamecreators@danielp.e4ward.com
My Specs - 2047MB RAM | P4 3.4GHz | XP 5.1.2600 SP2 | GeForce 6800 256MB | Dell 230310 1600x1200 34x27cm
Diggsey
18
Years of Service
User Offline
Joined: 24th Apr 2006
Location: On this web page.
Posted: 14th Apr 2008 19:40
Perhaps you should move ANNE to the GDK. It's much more capable for this kind of thing! If you don't mind, I'm going to have a go at making a neural net too.

RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 14th Apr 2008 23:34
Quote: "Thanks for keeping us updated."

Sure thing. Now that I am mostly done with a little game I wrote (Collision Course over in the Program Announcements board), I will be working on ANNE V3 a lot more. I hope to have something to upload by this coming weekend, or the next weekend - depending on how my schedule goes.

Quote: "Perhaps you should move ANNE to the GDK"

A very tempting thought. I am currently killing two birds with one stone. While I am writing ANNE V3, I am also stress-testing our OMMORPG library. The library is currently very capable of dynamically assigning variables and functions to "objects" (objects being any assigned index value, not necessarily 3D objects). This basically mimics some of the functionality of OOP.

Quote: "If you don't mind, I'm going to have a go at making a neural net too"

Not only do I not mind, but I welcome it! I can see having NNAI contests, which can be on several levels:

Trainer Contest: Who can train a specific NNAI to do various tasks or achieve various goals. The trainer would basically code the environment and scoring system for an AI, but not much more than that.

Library Coder Contest: Takes an existing NNAI library (complete with NNAI core functions) and codes an AI using the library commands without altering the core-library functions.

Core Coder Contest: Develops a new NNAI core-library.


Open MMORPG: It's your game!
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 4th May 2008 10:46
Finally. I have a working base cluster library. The library does not learn at this time, but it does show that this new library works using the dynamically assigned "chemical" formulas. I have also flipped the neuron firing around so that as each neuron fires, it recursively fires the neurons that are linked to it. The previous incarnations of ANNE tested all the neurons for firing sequentially, regardless of whether the neuron received a charge or not. Now, neurons are selected for fire testing both randomly and by connectivity linking. Finally, neurons in previous incarnations were only positively charged and had a positive threshold. Now neurons can be negatively charged and can contain a negative threshold (meaning they can discharge either for positive charges or for negative charges, but a single neuron cannot do both).
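A minimal Python sketch of those two changes together, recursive firing plus signed thresholds (structure and numbers invented for illustration; a depth guard stands in for whatever the real library does about loops):

    import random

    class Neuron:
        def __init__(self, threshold):
            self.threshold = threshold         # positive OR negative, never both
            self.charge = 0.0
            self.links = []

        def receive(self, charge, depth=0):
            self.charge += charge
            fires = (self.threshold >= 0 and self.charge >= self.threshold) or \
                    (self.threshold < 0 and self.charge <= self.threshold)
            if fires and depth < 50:           # guard against endless loops in the net
                out, self.charge = self.charge, 0.0
                for target in self.links:      # firing recursively drives the linked neurons
                    target.receive(out / len(self.links), depth + 1)

    rng = random.Random(2)
    net = [Neuron(rng.choice([1.0, -1.0])) for _ in range(50)]
    for n in net:
        n.links = rng.sample(net, 4)
    net[0].receive(3.0)
    print(round(sum(n.charge for n in net), 3))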

Attached is the test code. Not much to look at, but it simply shows some input/output results for the cluster. The next phase will be training the cluster.


Open MMORPG: It's your game!

Attachments

McLaine
18
Years of Service
User Offline
Joined: 20th Feb 2006
Location:
Posted: 17th Jul 2008 01:03
Any more news RiiDii?

I've been following this one with great interest.

It's not my fault!
Demonware
16
Years of Service
User Offline
Joined: 19th Jul 2008
Location:
Posted: 19th Jul 2008 21:59
Sounds terribly complicated, but it's still cool.
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 19th Jul 2008 22:17
Nothing yet. I have been really tied up with work as of late. A new quarter is starting, so I might have some time now.


Open MMORPG: It's your game!
RiiDii
19
Years of Service
User Offline
Joined: 20th Jan 2005
Location: Inatincan
Posted: 20th Jul 2008 23:38
Open for discussion and theories:

Currently, ANNE is based on a stack of neurons that fire and trigger other neurons. There have been various permutations of this concept, but each ANNE has used this foundation. I am trying to think of a true "next gen" neural net. My current thoughts are along the following lines.

Starting with our Dynamic Function Engine (DFE), which basically creates an ID value for each function within the code and calls the functions from a Select/Case statement based on that ID. This allows the code to create a stack of functions and call them in the order they were placed onto the stack. Functions can be added to, or removed from, a stack. Additionally, there can be more than one stack of functions.

Next, I am considering tossing in our Infinite State Engine (ISE), which is a similar concept for variables. Each "variable" within the ISE is designated by a Group ID and a State ID. The original concept for this design was to be able to create a dynamic variable list for any entity. For example, in an RPG, a creature could have characteristics like strength, dexterity, intelligence, health, and so on. The creature could also have skills, such as attack, defense, hide, archery, and so on. Using the ISE, the creature would be the Group (normally using an Object ID) and the characteristics and skills would be the States. So, to determine a creature's health, simply reference the Object ID of the creature and the State ID for health.

So here is what I am thinking...

The Neural Net consists of "smart" functions that can perform a variety of tasks. The primary task these functions perform is to add functions to a Function Stack. The second thing these functions would do is update their own variables using the ISE. The ISE variables would primarily track each function's performance, but can also be used to hold other data, such as Function IDs for functions to be added, or data to be passed on to the next function being called. Finally, functions that are not performing well can remove themselves from a function stack.

Of course, there would be task-specific functions as well, such as input and output functions (sensors and movers). All this is wrapped up with an epoch system which reads in and retains a score, and allows the functions to read the score to determine how well they are doing for any given period of time.


Basically, think of this concept as code that can re-write itself and evolve by removing code and/or arrangements that do not perform well and retaining code and/or arrangements that do perform well. The code is limited to the function list.
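A loose Python sketch of that idea (a function table keyed by ID, a Group ID/State ID style variable store, and a stack that drops its worst performer each epoch; every name and number here is hypothetical, and the bookkeeping is pulled into the main loop for brevity):

    import random

    rng = random.Random(3)

    def move_right(state): state[("bot", "x")] = state.get(("bot", "x"), 0) + 1
    def move_left(state):  state[("bot", "x")] = state.get(("bot", "x"), 0) - 1
    def idle(state):       pass

    FUNCTIONS = {1: move_right, 2: move_left, 3: idle}   # DFE-style ID -> function table

    state = {}                                           # ISE-style (Group ID, State ID) -> value
    stack = [rng.choice(list(FUNCTIONS)) for _ in range(10)]
    perf = {fid: 0.0 for fid in FUNCTIONS}               # per-function performance tracking

    for epoch in range(100):
        state[("bot", "x")] = 0
        for fid in stack:
            FUNCTIONS[fid](state)
        score = -abs(state[("bot", "x")] - 5)            # reward ending each epoch near x = 5
        for fid in stack:
            perf[fid] += score
        worst = min(set(stack), key=lambda f: perf[f])
        if perf[worst] < 0 and len(stack) > 1:           # poor performers drop off the stack...
            stack.remove(worst)
            stack.append(rng.choice(list(FUNCTIONS)))    # ...and a random function is added back

    print(state, stack)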

Now the obstacle is: what would these functions be?


Open MMORPG: It's your game!
