scribbles on AI

Some random ideas and stuff on AI, ALife, Cognitive Science, AGI, Philosophy, etc.


Consciousness is mostly considered (at least in the materialistic view) as something that resides within the boundary of someone's body. It perceives through senses like vision, hearing, and touch, and maintains agency through organs like the limbs. But ever since humans started building tools, these boundaries of consciousness may have been stretched a little further than the human body.

Although this article is about consciousness being extended out from the body, I'm not trying to say that consciousness is a nonphysical entity. The idea I'm trying to build here doesn't depend on whether consciousness is physical or not. It is that the boundary separating a human being from the rest of the world can expand through the process of extending consciousness.

Not every action of a human is conscious. On one hand, we do tasks like breathing completely unconsciously. On the other hand, a task we do consciously can become unconscious once we are completely used to it. The extension of consciousness I'm going to talk about is caused by this process of conscious tasks becoming unconscious.

I'd like to divide these extensions of consciousness into two parts: extensions of awareness and extensions of agency. The extension of awareness covers the ways we modify our perception. For example, let's say you are playing a game on a computer. When you keep engaging in the activity for a long time, at some point your perception of the actual world fades away (this happens because of selective attention). At that moment you perceive only the virtual world. It is as if your perception is extended (though in this case it is more like transcended). Another way the extension of awareness happens (maybe a better example of it) can be explained by some wearable technologies. Take Google Glass. Google Glass modifies your visual perception by adding a new layer of information to it. When someone wears it for a long time, the feeling of wearing something fades (just as in the example above), and the additional information added by the Glass becomes part of our unified perception. In a way, it is as if our awareness is extended.

The extension of agency happens when we use tools or devices in tasks that involve interacting with the world. When we start using a tool (even a simple one like a fork, hammer, or scissors), we direct our consciousness both toward the tool we are using and toward the external world (the work we are doing). But when we use that tool for the same task continuously for a long time, the conscious control of the tool becomes unconscious, and consciousness becomes mostly or completely directed toward the task we do with the tool. That makes the tool a temporary extension of ourselves. Another way of explaining it: at first we have conscious agency over the tool, and the tool has control over the world; but once the interaction between us and the tool becomes unconscious, we have direct conscious agency in the world. For example, say we are driving a car. Although we start driving consciously, as we keep driving, the conscious control we have over the car transforms into unconscious control. This is actually necessary to drive the car easily, since paying conscious attention to both the controls of the car and the road is hard. Imagine if you had to watch the road and at the same time consciously decide that you have to turn, consciously work out how many degrees to turn the steering wheel, and then consciously turn it (maybe you do drive like this when you are learning). It is also risky, since you cannot react quickly in an emergency. But when controlling the car becomes unconscious, your consciousness is directed only toward the road. And that temporarily makes the car an extension of yourself.

Most of the time these two extensions come together. For example, let's say we are using a computer (again). As I mentioned in the extension-of-awareness part, when we keep directing our attention toward the computer screen and the computer's sound, the world we perceive momentarily transforms from the actual world into the virtual world on the computer. And when we use the computer's controls (mouse, keyboard), our agency also transforms in the way I described in the extension-of-agency part. But there is a small difference between the computer example and the car or Google Glass examples. The computer changes the world we are conscious of from the real world to a virtual world, while the car or Google Glass doesn't change the world; it just modifies the way we see and/or control it. All of these examples are similar, though, in the sense that they stretch the boundary of our consciousness wider. But this doesn't happen all the time either; we don't always use tools unconsciously. We tend to bounce back and forth between unconscious and conscious use of tools. We bounce back to conscious use when our attention is changed by external reasons, such as an unfamiliar experience with the tool (e.g., a malfunction), or by internal reasons, such as a sudden realization that we are using a tool. Other than that, our consciousness can also be directed completely away from the work (e.g., thinking about something else while doing a repetitive task).

This extension of consciousness can be made more efficient by technology. Better devices can make the period of unconscious use much longer (by reducing the factors, mentioned above, that break the unconscious process), thus really making tools feel like parts of ourselves. An extreme form of this is the idea in the movie Surrogates (2009), where each human controls a synthetic body similar to him- or herself, with controllers connected directly to the brain. In a way, we can say this is another way humans evolve: by extending themselves. That is, by inventing more and more complex tools, we have been able to extend ourselves, or our consciousness, farther and farther. We can even turn this inward by saying that the body itself is a tool that the brain or mind is using unconsciously.

So, I have tried to explain the idea of extended consciousness, or the widening of the boundary that separates an individual from the external world. I have also tried to consider this as a way humans evolve. How this evolution proceeds in the future is, then, a matter of how technology develops.


[Uncanny Valley, Source:  http://www.everything-robotic.com/2014/04/our-relationship-with-uncanny-valley.html]

Mary Shelley's Frankenstein is a novel published in 1818 that tells the tragic story of a scientist called Victor Frankenstein and a creature he created, who tries to take revenge on his maker. Reading that story, we can see that the monster wasn't actually evil in the beginning. He later became an evil, destructive force mainly because of people's fear and rejection.

In this article I try to find the same or similar issues regarding the creation of intelligent or conscious machines (in other words, AI). The question I ask is: will humans ever accept AI as sentient beings (or maybe conscious beings) and stop treating them as tools or property once machines have actually become advanced enough? The difference between Frankenstein's monster and machines (when I say machines in this article, I mean sufficiently intelligent and advanced machines) is that, unlike with machines, people rejected the monster out of fear. With machines, it is more a matter of seeing them as objects that are unequal to and lesser than living beings (some people have the fear too, I guess). I'm not saying that AI will become destructive, nor am I going to talk much about technical reasons why AI might become destructive. I'll rather be talking about the human attitude toward machines, how it came to be, and how it may be in the future.

As you already know, in the current situation AIs are not considered sentient beings. They are considered tools or devices that help humans in their work. But if AI became capable of feeling (or became conscious), would we still feel the same about it? Or one could ask: what would machines need in order to become a moral concern of humans? Are humans capable of feeling anything toward non-humans (and maybe toward something that is not even an animal)?

An answer that some (maybe most) people will give is that AI can never have consciousness, and because of that, AIs are no more than tools. Or maybe they will say that an AI's feelings are just programmed rules added by humans, not real feelings. The first answer assumes that consciousness is not reproducible by humans. It identifies consciousness as something unique and special and uses that to justify the idea that humans are unique and special. (It can go the other way too: maybe humans' own conception (or misconception) that they, and maybe some animals, are somehow better justifies the supposed uniqueness of consciousness. By the way, I'm not saying consciousness is not important. I'm only saying it may be reproducible, and that things other than humans may be able to have it.) But the problem is that we cannot prove this uniqueness, and we cannot really know whether anyone other than ourselves is conscious either (including other humans). So how does this belief come to people's minds? Maybe it came from religion or from certain philosophies, since these sometimes suggest the idea of a nonphysical mind or a soul (if that is your belief, this argument doesn't apply to you, since idealists have their own arguments to support their views). But we also see this belief or intuition in people who don't believe in the existence of a nonphysical world. Maybe the idea that we humans have something special is more an unconscious one, developed through the effects of society, than a conscious belief.

The argument that an AI's feelings are merely programmed rests on the same intuition as the one about consciousness: that humans are somehow different. But human feelings are created by the brain and work through our senses, which would be the same for an AI. Beyond that, humans are also controlled by unconscious desires, which act just as programmed functions would, to fulfill basic human needs. This suggests that the feelings an AI or a machine might have and ours are not so different after all.

Maybe the feeling that a machine or an AI is not worth accepting as sentient, or as something more than a tool, came to people's minds because we made the AI (or are going to make it). This could be true, since for a long time humans have created only tools and devices, and that may have given us the conscious or unconscious idea that what we build is neither sentient nor better than or equal to us. But the problem is that humans are also made by humans. Still, there is a difference between how humans make humans and how humans make machines. AI is built the way we build tools, crafts, or equipment, whereas making humans happens in a different way. We have also evolved to take care of our infants, but this evolutionary habit may not extend to AI, regardless of whether the AI is conscious (and sentient) or not.

This rejection of (maybe even fear of) machines may also be caused by biological reasons. There can be an unconscious (or maybe even conscious) fear in humans of things that are not human, and this fear may have been what ensured our survival in the early ages of human evolution. Also, since a conscious AI could take the form of a humanoid robot (or we think it would, since that idea is given to us by the media), this fear or strangeness can be explained using the uncanny valley hypothesis. The uncanny valley, a hypothesis in the field of human aesthetics, says that when human features (say, on a humanoid robot) look and move almost, but not exactly, like those of natural human beings, they cause a response of revulsion among some human observers. The explanations given for this hypothesis may also explain the reasons for the rejection we have toward machines.

The fact that an AI or a robot is not a biological entity can also be a reason for not accepting it. Since we are all biological entities, and we grow up seeing only biological entities as living beings, a notion can form in people that somehow only biological entities can be alive. And that can make humans reject the sentience of machines too.

Will these ideas change? The answer will depend on how humans and technology interact and how advanced machines become in the future. Maybe people in the future will develop empathy toward machines, since they will grow up with them as much as they grow up with other humans. This may even change the notion of human uniqueness. But we cannot really know, so it is still a question that needs answering.


Creativity can be defined as the process of creating something original and valuable. It can be the creation of an idea, a story, a joke, a painting, music, or even a strategy or a solution. A creative task of this kind mostly involves a chain or a tree of decisions. For example, if we are telling a story, we have to make decisions such as what the places will be, what the time period will be, who the characters will be, and what incidents will occur in the story.

When we talk about AI, we can see that AI has been used for some time in tasks that require creativity, such as creating strategies in games (e.g., chess). There have also been attempts to teach computers to write stories and to create paintings and music. Although machines can perform some of these tasks better than humans (e.g., playing board games), there are also tasks that machines are not very good at. For example, making a joke or telling a story is not something AI does well compared to humans.

In most AI approaches that try to engage in a creative process, finding a solution to a problem requires choosing an optimal solution, or a set or chain of solutions, from a pool of candidates. This solution pool can be a structure like a tree (e.g., a game tree) or a list. The selection process may involve an objective function that measures the strength of a given solution. It may also involve learning, which either modifies the selection process, modifies the existing solutions in the pool, or adds new solutions to it. This approach works well when the number of solutions in the pool is low and the selection mechanism is efficient. But when the number of solutions in the pool grows exponentially and/or when we are unable to define a good selection mechanism, this approach doesn't work well.
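The pool-plus-objective scheme described above can be sketched in a few lines of Python. Everything here is illustrative: the candidate solutions are just tuples of numbers, and the objective function simply sums them, standing in for whatever domain-specific scoring a real system would use.

```python
def objective(solution):
    # Stand-in scoring function: prefer candidates with a higher
    # total value (a real system would score moves, sentences,
    # brush strokes, etc.).
    return sum(solution)

def select_best(pool, objective):
    # Exhaustive selection over the whole pool. This works while
    # the pool is small, but becomes infeasible when the number of
    # candidates grows exponentially -- exactly the failure mode
    # described above.
    return max(pool, key=objective)

pool = [(1, 2, 3), (4, 0, 1), (2, 2, 0)]
print(select_best(pool, objective))  # -> (1, 2, 3)
```

The same structure applies whether the pool is a flat list, as here, or the leaves of a game tree; what changes is only how the candidates are enumerated.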

How the human brain engages in such a creative process is a bit of a question. It seems that the brain makes decisions using its highly parallelized processing capabilities; at the very least, the brain needs parallel processing to form a unified perception, since it must process information from multiple sensory organs simultaneously. It is not clear whether humans make decisions by unconsciously selecting from all available solutions or by focusing on an individual solution (perhaps selected from the limited number of solutions the brain can generate). But somehow the brain manages to do reasonably well at tasks that computers are not good at, and those mainly include tasks with an exponential number of possible solutions.

As I mentioned earlier, tasks like writing stories, poems, and jokes are something humans do better than machines. This may be because the solution pool for this kind of task is considerably large. For example, if we want to describe an imaginary incident, we create a mental representation of the actual or an imaginary world that acts according to some set of rules, and we create a chain of events in that world. When we direct this event chain, we choose events according to the feelings we want our audience to feel and the end goal we want from the task. But how we choose one event over another is the most important problem we need to solve. There can be a large number of events to choose from at each step of the incident, and that makes the solution pool exponentially large. What criteria should be used to select one event rather than another is also something that is hard to teach a computer.

Beyond that, in this kind of task, evaluating the intermediate steps and the end result (the story, the poem, or the joke) is hard without making the machine understand the meanings of the story or the words. For humans, these meanings are formed through sensory experiences, and it is not easy to fully explain them through language. So it is not easy to create a machine that interprets these meanings. (These sensory experiences are not directly relevant to machines either, since their sensory experiences differ from humans'; they may have more, fewer, or entirely different sensors.) Also, we don't exactly know what kind of structure we would need to hold these meanings.

But the meaning of a word for a machine doesn't have to be the same as it is for us. The meaning of a word can be the collection of other words it relates to and the types of relations it has with those words. A technique like a semantic network can be used to hold these meanings (I talked about something like this in the article http://tharindra-galahena.tumblr.com/post/88141800516/artificial-imagination). But achieving higher complexity in these kinds of structures, and defining relations between such structures and human emotional levels, can still be a bit of a problem.
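A minimal sketch of that idea in Python, with a word's "meaning" stored as typed relations to other words (the words and relation names here are made up for illustration):

```python
# Each word's "meaning" is just its typed relations to other words.
semantic_net = {
    "dog":    {"is_a": ["animal"], "has": ["tail", "fur"], "can": ["bark"]},
    "cat":    {"is_a": ["animal"], "has": ["tail", "fur"], "can": ["meow"]},
    "animal": {"is_a": ["living_thing"]},
}

def related(word, relation):
    """Words linked to `word` by `relation`."""
    return semantic_net.get(word, {}).get(relation, [])

def shared_meaning(a, b):
    """A crude similarity measure: relations whose target words overlap."""
    overlap = {}
    for rel in set(semantic_net.get(a, {})) & set(semantic_net.get(b, {})):
        common = set(related(a, rel)) & set(related(b, rel))
        if common:
            overlap[rel] = sorted(common)
    return overlap

print(related("dog", "is_a"))        # -> ['animal']
print(shared_meaning("dog", "cat"))  # shared 'is_a' and 'has' relations
```

Even this toy network lets a program answer questions like "how are a dog and a cat alike?" without any sensory grounding; the hard part the paragraph above points to is scaling such structures up and tying them to emotional levels.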

But one can argue that meaning is not always important in creating something such as a story. A chatbot can answer questions reasonably well without knowing the meanings of its answers; it uses a dataset of text to learn the most probable answer to a given question. And techniques like Markov chains can be used to create text using only the probabilities of each word appearing after a given word in a huge corpus of text. But the applicability of this method to creating long stories or jokes is problematic. Creating a sentence or two is not hard, since the solution pool for that is small. But a longer story involves a large number of choices, and every choice can affect future choices. The overall story must also be directed according to a certain theme, which makes the task harder. All of this would be much easier if there were a better model of meaning.
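A first-order Markov chain of the kind mentioned can be built in a few lines. The tiny "corpus" here is invented for illustration; a real generator would train on a much larger body of text:

```python
import random

corpus = ("the old house stood on the hill . "
          "the old man walked to the house . "
          "the man stood on the hill and looked at the house .")

# Transition table: each word maps to the list of words that follow
# it in the corpus (duplicates preserve the relative frequencies).
words = corpus.split()
transitions = {}
for cur, nxt in zip(words, words[1:]):
    transitions.setdefault(cur, []).append(nxt)

def generate(start, max_words, seed=0):
    # Random walk over the chain: each next word is drawn with the
    # frequency at which it follows the current word in the corpus.
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_words:
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

print(generate("the", 10))
```

Locally, each pair of adjacent words is plausible, but nothing holds the walk to a global theme, which is exactly the limitation described above for long stories.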

So creativity is something humans are still much better at. We cannot say that this will remain the case in the future, but we cannot really know what's going to happen, since predicting the future is not something even humans are good at.