
Code Snippets / Concept Bot

lower logic
Joined: 15th Jun 2006
Posted: 8th Feb 2007 05:39 Edited at: 8th Feb 2007 05:44
This snippet is an implementation of an idea I had. I think that to have a believable and realistic chatbot of any kind, the program would need to know the relations between various ideas, concepts, words, and other things.

This program makes links between various things: you can tell it statements in the form "{noun} {verb} {noun}", and you can ask it what it knows about something by saying "what is {noun}?". A more complete implementation would have modifiers in the links, so you could say things like "a week has seven days". Don't mix plural and singular; it's not that smart at language parsing.
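
To give a rough idea of how the link structure might look in code, here is a minimal sketch (in Python rather than the DBPro of the actual snippet; the function names and seed definitions are made up for illustration, not taken from the real program):

# Minimal concept-network sketch: stores "{noun} {verb} {noun}" links and
# answers "what is {noun}?" by listing everything linked to that noun.
links = []   # list of (subject, relation, object) triples

def tell(statement):
    parts = statement.lower().split()
    if len(parts) != 3:
        print("I only understand '{noun} {verb} {noun}' statements.")
        return
    links.append(tuple(parts))

def ask(noun):
    noun = noun.lower()
    known = [(s, r, o) for (s, r, o) in links if s == noun or o == noun]
    if not known:
        print("I don't know anything about '" + noun + "'.")
        return
    for s, r, o in known:
        print(s, r, o)

# a couple of hypothetical seed definitions and a query
tell("keyboard is device")
tell("keyboard has key")
tell("computer has keyboard")
ask("keyboard")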

Instead of trying to make the program appear to chat realistically by parsing complicated grammar structures and giving witty responses, I've tried to make a chatbot that can make the connections between various ideas and, in a way, think.

A big thing missing from this snippet, and one that would be tricky to program, is the idea of instances of various concepts. For example, it knows that "self" has a "name", but since "self" is defined as a concept (like entity, time, etc.) rather than an instance, I cannot actually give the program a particular name.

If the program had instance support, you'd be able to say things like "I have an animal named spot" and the program would be able to ask "is spot a specific type of animal?", and you could say "he is a dog".


I've given it a good 120 or so definitions, so it already knows some things such as word, number, image, state, time, day, computer, network, sum, set, entity, and lots of others.




Here's an example of it responding to questions and learning what a keyboard is. I only type the lines that start with ">".

The ultimate goal would be to get the program to use its knowledge database to converse in a natural way. It could also be much smarter if it could read a dictionary to expand its knowledge.
Kohaku
Joined: 3rd May 2004
Location: The not very United Kingdom
Posted: 8th Feb 2007 12:00
That's neat. You could also get it to run internet searches on things it doesn't know about, using a long list of words and names. After a while it would become all-knowing!

That's my AI plan anyway.


You are not alone.
Vampiric
Joined: 30th Oct 2006
Posted: 8th Feb 2007 18:03
Simple code, but with a very complicated-looking outcome. Great work.

[quickly gets out of school so he can compile the code]

Computer says n00bed
MadrMan
Joined: 17th Dec 2005
Location: 0x5AB63C
Posted: 8th Feb 2007 19:53
Looks a bit like the bot I made for the challenge thread; it can learn about objects too. This looks good too, btw.


lower logic
Joined: 15th Jun 2006
Posted: 8th Feb 2007 20:32 Edited at: 8th Feb 2007 20:36
Thanks for the comments.

MadrMan: I'd had the idea of making a bot that maps out knowledge in the back of my mind for a while, and seeing your learning bot convinced me that the idea could work. It gave me the urge to finally try making my own bot. Unlike yours, though, I wanted my bot to be able to make connections between various words.


I'm thinking of working a bit more on this bot by adding more functionality that I had in mind.

First, adding link modifiers and object modifiers. This would enable you to refine the relations between concepts. For example, you'd be able to say "a door sometimes is-made-of wood" where 'door' is the subject, 'is-made-of' describes the relation, 'wood' is the object, and 'sometimes' modifies the relation.

If the modifiers end up always being words like "sometimes", "always", "usually", "never", "often", "not", etc., we could assign percentages (e.g. always = 1.0, never = 0.0, sometimes = 0.5, usually = 0.7, etc.).

With percentages, once instances are added, you could say "I have a door"; the computer would see that it is not 100% sure what this particular instance of a door is made of, so it would ask "what door is-made-of?", which would ideally be translated into normal-sounding English before responding to the user.

Another example of these relation modifiers with instances would be the concept "a dog usually has a name": if you said "I have a dog", it would see that the dog has a high chance of having a name, so it would want to complete its knowledge of the dog by asking "what dog has name?", which in normal English would be "what is the name of the dog?".

Also, with modifiers on the objects, you'd be able to tell it things like:
"a week always has seven day"
"a car has four wheel"
"a year usually has 365 day"
"a year sometimes has 366 day"
"a leap year always has 366 day"
"a language has many word"

In all these cases, the object modifier indicates either a qualitative or quantitative amount.
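
As a rough illustration of how both kinds of modifier might be stored, here's one possible shape for the data (a Python sketch; the certainty numbers and names are my own assumptions, not anything implemented yet):

# Hypothetical extension: each link carries a certainty value taken from the
# relation modifier and an amount taken from the object modifier.
CERTAINTY = {"always": 1.0, "usually": 0.7, "often": 0.6,
             "sometimes": 0.5, "not": 0.0, "never": 0.0}

links = []   # (subject, relation, object, certainty, amount)

def tell(subject, relation, obj, modifier="always", amount=None):
    links.append((subject, relation, obj, CERTAINTY.get(modifier, 0.5), amount))

tell("week", "has", "day", "always", 7)
tell("car", "has", "wheel", "always", 4)
tell("year", "has", "day", "usually", 365)
tell("year", "has", "day", "sometimes", 366)
tell("language", "has", "word", "always", "many")

# With instances added, a link whose certainty is high but whose value is
# still unknown for a particular instance would be a good question to ask,
# e.g. "what is the name of the dog?" when "dog usually has name" is stored.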

Modifiers that have "of" or "which" would be harder to implement. For example, if you wanted to say a computer network is a network of many computers, it would need to parse something like:
"a computer-network is a network of many computer", or
"a computer-network is a network which has many computer"

In these cases network, computer-network, and computer are all still abstract concepts, because you're not talking about a particular network or computer-network or computer. But since you want to store the knowledge that a computer-network is a particular _type_ of network, the computer-network would have to relate/link to a semi-concrete instance of a network. In this case, the semi-concrete, semi-abstract network would be a semi-concrete, semi-abstract system (which it knows via "network is system"), and it would be able to know "a system has many entity" using the previous object modifier. But we'd need a way to specify that the entities, in this case, are to be downcast to a particular type of entity called "computer".
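
One possible way to model that downcast, sketched again in Python with invented names, is to reify the restriction: computer-network links to an intermediate, semi-concrete network node whose entities are constrained to be computers.

# Sketch of the "network of many computers" case. All names here are
# illustrative assumptions, not part of the existing snippet.
concepts = {}

def concept(name):
    return concepts.setdefault(name, {"is": set(), "has": []})

# ordinary abstract links
concept("network")["is"].add("system")
concept("system")["has"].append(("entity", "many"))

# reified restriction: a network whose "many entity" are downcast to computer
restricted = concept("network-of-computers")
restricted["is"].add("network")
restricted["has"].append(("computer", "many"))

concept("computer-network")["is"].add("network-of-computers")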
Milkman
Joined: 30th Nov 2005
Location: United States
Posted: 9th Feb 2007 00:57
This is cool; I was actually planning on doing something similar to this at one point. Maybe if I find some spare time down the road I'll give it a try. Keep this up, I want to see what you can do with it. Good work so far.

Who needs a signature?
Code Dragon
Joined: 21st Aug 2006
Location: Everywhere
Posted: 9th Feb 2007 16:16 Edited at: 10th Feb 2007 14:36
Very interesting, especially that it can learn new things and knows what itself is. The one problem that I think AI will never overcome is self-consciousness. Robots think "self" is just another entity, they'll never understand that "self" is the process that they are.

I remember seeing a robot on the show 2057; it couldn't learn new things, so this happened:

Quote: "
Guy: (points at the floor) Go where I am pointing.
Robot: I'm not sure where you're pointing. I know you. Pascal, Hello.
Guy: Hello. Go where I am pointing.
Robot: I'm not sure where you're pointing. Let's shake hands.
Guy: Go where I am pointing.
Robot: I'm not sure where you're pointing.
"


Of course, the robot can see the floor, but I think that it thought "I am pointing" is a place. It had no idea what "pointing" means, so that's probably why it said "I'm not sure where you're pointing." It's very important for an AI to be able to learn; that's what keeps it multi-functional. I can't wait for the day that robots do all the chores.

lower logic
Joined: 15th Jun 2006
Posted: 10th Feb 2007 00:30 Edited at: 10th Feb 2007 00:31
Quote: "Robots think "self" is just another entity, they'll never understand that "self" is the process that they are."


True. If the program supported instances, it would be able to learn that both self and user were instances of intelligent-beings. If it had some special code to refer to self as "I" and know the user referred to self as "you", and to refer to the user as "you" and know the user referred to himself as "I", it would seem close to being self-aware within the limits of this framework, because you could ask "do you understand?" and the bot could say "Yes. I understand.", among other things. It could also ask infinitely many questions aimed at filling in holes in its knowledge. For example, consider this pseudo conversation:

user: I'm a person
(robot links person to human, and human to house)
robot: so you live in a house?
user: yes
(robot links house to location)
robot: where is this house?
user: in boston?
(robot does not understand the meaning of boston. puts "boston" into immediate_topic)
robot: what is 'boston'?
user: it is the name of a city
(robot replaces "it" with immediate_topic, "boston")
(robot *somehow* figures out that boston is an instance of city, not a new class that links to city or a subclass of city)
(robot links city to size)
robot: is boston a large city?
user: yes
(robot links city to location)
robot: where is boston?
user: you are in boston
(robot updates self.location attribute to match this information. robot sees boston.location is still unknown)
robot: but where is boston?
user: it is in the united states?
(robot links united states to country)
(robot finds the next empty attribute in the user instance; robot sees user is an instance of human and links human to friends)
robot: I see. Do you have any friends?
etc.

As you can see, with the proper coding, a conversation like this could go on forever since just about anything could link to anything else, and while in this sort of program the robot can't have memories or feelings, it would be able to have a good conversation with the user.
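
The question-asking behaviour in that imagined conversation could, in principle, be driven by a simple loop that looks for the first unfilled attribute on whatever instance is currently in focus. Here's a rough Python sketch; the concept table, attribute names, and data layout are assumptions for illustration, not anything I've written in DBPro:

# Pick the next question by finding an attribute that the current instance's
# concept says it should have but that is still unknown.
concept_attrs = {
    "human": ["house", "friends"],
    "house": ["location"],
    "city":  ["size", "location"],
}

def next_question(instance):
    for attr in concept_attrs.get(instance["is_a"], []):
        if attr not in instance["attrs"]:
            return "what " + instance["name"] + " has " + attr + "?"
    return None

user = {"name": "user", "is_a": "human", "attrs": {}}
print(next_question(user))                 # -> what user has house?
user["attrs"]["house"] = {"name": "house1", "is_a": "house", "attrs": {}}
print(next_question(user))                 # -> what user has friends?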

But to be self-aware it would need to understand that what it said could alter the knowledge and/or mood of the user, which would be hard to fit into this model. Also, the program would need a way to have memories and feelings, which might possibly be roughly modeled as attributes and links to stories.
Code Dragon
Joined: 21st Aug 2006
Location: Everywhere
Posted: 10th Feb 2007 14:34 Edited at: 10th Feb 2007 18:24
Yeah, feelings would be hard to implement. Of course we can't give them real feelings, but thousands of if statements could simulate them, with some routine controlling facial expressions and tone of voice.

rem events nudge the mood variables
if gethurt = 1 then inc mad, .2
if getgift = 1 then inc happy, .4

rem mood thresholds trigger outward behaviour
if happy > .5 then smile()
if funny > .4 then laugh()
if scared > .6 then runaway()

Another thing you can simulate but never truly create is free will. I've seen that some programmers have implemented "free will" in their AIs, but it's nothing but fake will. It doesn't think; it just follows its programming. Can't get much closer to being a slave than that.

Brain111
Joined: 5th Feb 2007
Location: In my own little world.
Posted: 16th Feb 2007 01:51
This is very interesting. One thing that would be cool to add is the ability to make it save everything it learns somehow, and then load it back up the next time you run it. That way, if you talked to it every day, it could learn lots of stuff eventually. You would probably have to bump up the max words significantly, though.
lower logic
Joined: 15th Jun 2006
Posted: 16th Feb 2007 05:03 Edited at: 16th Feb 2007 05:07
After doing some searching on the topic, I found there's already a lot of existing work on this:
http://en.wikipedia.org/wiki/Ontology_(computer_science)
http://en.wikipedia.org/wiki/Semantic_network

Anyone who thought this snippet was neat should really check out these projects:
http://web.media.mit.edu/~hugo/conceptnet/
http://pi7.fernuni-hagen.de/forschung/multinet/multinet_en.html
http://wordnet.princeton.edu/
jasonhtml
Joined: 20th Mar 2004
Location: OC, California, USA
Posted: 16th Feb 2007 05:47 Edited at: 16th Feb 2007 05:49
OMG! I was doing research on this stuff just 2 weeks ago. I already designed a VERY similar system (I just haven't made it yet)! You beat me to it, nice work! I guess great minds think alike, eh?

aticper
Joined: 27th Jan 2006
Posted: 4th Mar 2007 19:26
I'm going to have to disagree with the statements that

A. AIs cannot have free will

and

B. AIs cannot be self-aware.


As for the free will thing: if your definition of a lack of free will is that it cannot do anything its "program" doesn't allow, then we humans have no free will either. All a person is is a computer running a program and reacting to its environment. We can't do anything that the program running on our brains doesn't allow.

As for the self-awareness thing: in order for a program to be self-aware, it has to be self-referencing (i.e. it has to take its internal state as an input). It also has to have a complex classification system in order to be able to 'consciously' refer to a set of concepts and work them into its worldview.



That said, it looks like a very nice app, and I'm curious to see what progress is made on it!

I'm not paranoid. Stop thinking that I'm paranoid!
Ankillito
Joined: 10th Dec 2006
Location: Litigious California
Posted: 4th Mar 2007 19:37
Is Lower Logic still active? I haven't seen him post anything in almost a year.

"There will always be evil, for, without evil, the good shall lose their virtue."
lower logic
Joined: 15th Jun 2006
Posted: 4th Mar 2007 22:12 Edited at: 4th Mar 2007 22:13
I'm still alive. I'm learning C++/OpenGL and using Linux now, so I won't be able to program in DarkBasicPro anymore. I'm glad people found my snippets interesting.
Milkman
Joined: 30th Nov 2005
Location: United States
Posted: 4th Mar 2007 22:13
Quote: " Lower Logic Posted: 15th Feb 2007 "


Who needs a signature?
Ankillito
Joined: 10th Dec 2006
Location: Litigious California
Posted: 5th Mar 2007 00:31
Oh. We'll miss you, Lower Logic, even if I could never figure out any of your programs....

"There will always be evil, for, without evil, the good shall lose their virtue."
Code Dragon
Joined: 21st Aug 2006
Location: Everywhere
Posted: 10th Mar 2007 20:12 Edited at: 10th Mar 2007 20:30
Quote: "it has to take its internal state as an input"


Its internal state would be its memory, wouldn't it? But memory is like a separate entity to a program; it has to go out and access it. With people the memory is within us; we access it internally. The memory isn't in the program; in fact, the program is in the memory. AI is non-living, and I don't think it will ever "awaken" and be like a human with a robot body. I suppose having data on itself is self-awareness, in a sense. Computers are made up of unconscious gates, so a network of them isn't conscious either. I know the brain is no different, but there's no denying that there's one, whole, true perception in the body: consciousness. I don't see how computers will ever be one whole unit; people design them to create the illusion of functioning as one entity.

I dunno. It's like the earth before life. You couldn't experience the world from a rock's perspective because there is no perspective there. You also cannot perceive the rock yet, because your perception doesn't exist yet. It exists, yet nothing and nobody knows it. This is how many people think being dead feels: you don't feel anything at all. Now this is my personal belief (I got into an "argument" in another thread about it), but I believe people are separate from, but connected to, their bodies, so you can think even when you're dead. Even though there's no real proof, I believe what I believe because I couldn't live with the scary thought that after people die... never mind, it's too horrible to think about.

Quote: "All a person is is a computer running a program and reacting with its environemnt. We can't do anything that the program running on our brains doesn't allow."


I'm going to have to disagree with that; I could show you about 500 counterexamples if I had the time. (About all the famous people in history would do it.) The brain handles unconscious behaviors in a programmed way, but you as a person, not a brain, are always free to do anything you want. The sky's the limit. The reason people "can't" do some things isn't that they don't have what it takes; it's that most people get "programmed" by others to believe they can't do it. They live reactively to keep themselves safe, always insisting that they're just another turd in the herd, that they can't change their personality, and that any attempt to get rich or be happier than other people is evil. But if you look at famous people, they'll tell you that with lots of hard work (the opposite of what social programming allows) you can get to the top. Nothing can reprogram people except themselves, but most do it without even knowing it.

I do agree to some degree, though. On a very low level, people respond to incentives. The reason people do what they do is that there's something in it for them. But nobody's forcing them to do anything; even though you'd be crazy (or too rich to bother) not to pick up a $100 bill you saw on the ground, you're always free to choose. Maybe it's because we can experience happiness on a different level than computers can. We want happiness by definition, so we work for it. I don't know; this is stuff that can't really be proven or disproven.

Do not meddle in the affairs of dragons...for you are crunchy and good with ketchup.
