scribbles on AI

Some random ideas and stuff on AI, ALife, Cognitive Science, AGI, Philosophy, etc

Consciousness in AI is a topic debated not only by computer and cognitive scientists, but also by philosophers. Philosophers like John Searle and Hubert Dreyfus have argued against the idea that a computer can gain consciousness. For example, arguments like the Chinese Room have been proposed against the idea of strong AI. But there are also philosophers like Daniel Dennett and Douglas Hofstadter, who have argued that computers can gain consciousness.

Although there are debates about how to create a conscious machine, in this article I want to look at the creation of machine consciousness in another way. Do we have to design the AI's architecture for consciousness from the beginning in order to make a conscious AI? Could the AI gain consciousness on its own? Or could consciousness emerge from the AI's architecture once it gains sufficient complexity through evolution or self-modification, without human interference?

Consciousness without Human Design

Although consciousness is an important quality, defining it clearly is a difficult task. But we can roughly define it with two main components: awareness (phenomenal awareness) and agency. Awareness is the ability to perceive the external world and also to feel or sense the contents of one's own mind. Agency is control over the external world and also control over oneself or one's mental states; that is, control over both the behavioral aspects (moving external organs such as hands and feet) and the mental aspects. We should also be aware of this control for it to count as conscious: we should know or feel that we have the control (or that we are the ones doing it). Actions we are not aware of, like the beating of the heart or breathing, or things we do without thinking (for example, walking or driving while concentrating on something else), aren't taken as conscious actions. So, putting all of this together, we can define consciousness (or at least the definition I'm using for this article) as awareness and control over external objects together with awareness of one's own mental content. Another way of putting it is having a sense of selfhood.

According to the above definition of consciousness, we can see that the concept of self is also linked with consciousness. So, what is the self? The self can be defined as the representation of one's identity, or the subject of experience. In other words, the self is the part that receives the experiences, or the part that has the awareness. The self is an integral part of human motivation, cognition, affect, and social identity.

The concept of self may not be something we are born with. According to the psychoanalyst Sigmund Freud, the part of the mind which creates the self develops later in the psychological development of the child. In the beginning a child only has the id, a set of desires which cannot be controlled by the child and which only seeks pleasure (the pleasure principle). Later in the development process a part of the id is transformed into the ego, and this ego creates the concept of self in the child. Now the question becomes: can an AI develop to a stage where it also creates something like the ego of the human mind? If the AI has a structure with the necessary similarities to a human mind, or an artificial brain similar to the human brain and nervous system, then the AI may be able to undergo a process which creates some sort of ego similar to the human ego. In humans this ego is created through the interactions the child has with the external world, so perhaps the influences the AI faces could likewise trigger the creation of an ego in the AI.

According to Jacques Lacan, the self of a child is created in a stage called the mirror stage. In this stage the child (at 6-18 months of age) sees an external image of his or her body (through a mirror, or represented to the child through the mother or primary caregiver) and identifies it as a totality. In other words, the child realizes that he or she is not an extension of the world, but a separate entity from the rest of the world, and the concept of self is developed through this process. So, can an AI go through this kind of stage and develop a self? Regardless of whether the structure of the AI is similar to a human mind or not, realizing for the first time that it is a separate individual would be a new and revolutionary experience for the AI (assuming the AI is sophisticated enough to process that kind of realization properly). Such an experience might produce a change in the AI that gives it an idea of self. But if this stage is similar to the mirror stage, then the AI must also have a way of seeing its own reflection in order to undergo it. If the AI has a body (a robot, maybe) and doesn't extend beyond that body, this won't be a problem. But if the AI can be copied onto new hardware, or can extend itself through a network, then defining its boundaries becomes difficult, and seeing itself as something unfragmented with clear boundaries will be a bit tricky. Still, if the AI's architecture allows a different way of defining boundaries and seeing itself as an individual, this could work.

When we consider other animals, we can see that an animal must have a certain complexity to have self-awareness (or consciousness). Methods like the red spot technique (the mirror test) have shown that some animals, such as certain species of apes and dolphins, display self-awareness while others do not. So we can assume that an AI must also have an architecture of sufficient complexity in order to develop consciousness. At some point in a process of evolution, the AI could achieve that necessary complexity and become conscious. But if the evolution of the AI is similar to the evolutionary process in Darwinian theory, then the AIs which finally achieve consciousness won't be the ones the process began with, because each new generation of AIs is built by merging the best architectures of the old generation and mutating them. For this merging and mutating process the AI may need human assistance.
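To make the merge-and-mutate idea above concrete, here is a minimal sketch of such an evolutionary loop in Python. It assumes, purely for illustration, that an AI's architecture can be encoded as a flat list of numbers and scored by a hypothetical fitness() function; a real architecture search would replace both of those assumptions.

```python
# Minimal sketch of an evolutionary loop: keep the best "architectures",
# merge (crossover) and mutate them to build the next generation.
import random

def fitness(genome):
    # Placeholder score: rewards genomes whose values sum close to a target.
    return -abs(sum(genome) - 10.0)

def crossover(a, b):
    # Merge two parent architectures gene by gene.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(genome, rate=0.1):
    # Randomly perturb some genes.
    return [g + random.gauss(0, 1) if random.random() < rate else g for g in genome]

def evolve(pop_size=20, genome_len=8, generations=50):
    population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # keep the best half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children                # next generation
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best genome:", [round(g, 2) for g in best],
          "fitness:", round(fitness(best), 3))
```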

But a single AI can also undergo a sort of evolutionary process of its own. Such a process would be self-improvement, or more precisely recursive self-improvement: the ability of an AI to reprogram its own software or add parts to its structure or architecture (maybe hardware-wise too). This process could also allow the AI to reach the necessary complexity at some point.
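As a contrast to the population-based loop above, here is a toy sketch of what a single agent's self-improvement loop could look like, assuming its "architecture" is just a dictionary of tunable parameters and that benchmark() stands in for some self-evaluation. The genuinely recursive case the article imagines, an AI rewriting its own code, would of course be far richer than this.

```python
# Toy self-improvement loop: the system proposes edits to its own parameters
# and adopts only those that improve its benchmark score.
import random

def benchmark(params):
    # Placeholder: the system "performs better" as both values approach 5.
    return -((params["depth"] - 5) ** 2 + (params["width"] - 5) ** 2)

def propose_modification(params):
    # The system edits one of its own parameters at random.
    key = random.choice(list(params))
    return {**params, key: params[key] + random.choice([-1, 1])}

def self_improve(params, steps=100):
    score = benchmark(params)
    for _ in range(steps):
        candidate = propose_modification(params)
        candidate_score = benchmark(candidate)
        if candidate_score > score:          # keep only changes that help
            params, score = candidate, candidate_score
    return params, score

if __name__ == "__main__":
    print(self_improve({"depth": 1, "width": 1}))
```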

So, perhaps an AI will be able to produce consciousness by itself, through self-modification or through a stage in its own psychological development, without humans specifically designing it to be conscious from the beginning.

 

Beauty is a well-known quality we perceive that needs no introduction. From the dawn of humanity, humans have found, enjoyed, described, analyzed, and even created beauty. Humans see beauty in many things, from the softness and tenderness of a flower to the bright orange color of a sunset, and from the smiling face of a woman to a poem or a painting about her.

However, in this article I would like to talk about another aspect of beauty. Can beauty only be perceived by humans? Can another intelligent and conscious being (an AI) perceive beauty? Would such beings perceive beauty in the same things as humans? These are the questions this article tries to discuss.

Philosophy of Beauty

The nature of beauty is a widely discussed theme in Western philosophy; it was taken up even in ancient Greek philosophy. In fact, it is one of the fundamental topics in philosophical aesthetics.

The most popular question about beauty is whether it is a subjective or an objective quality. Most early philosophers, like Plato and Aristotle, believed that beauty is an objective quality. Later, around the 18th century, philosophers like David Hume argued that beauty is a subjective experience. Yet even if beauty is considered a subjective quality, we can see that most of the things we find beautiful tend to be common or share common qualities. These commonalities, however, do not necessarily prove that beauty is objective either.

Evolutionary Aesthetics

Evolutionary aesthetics suggests that the basic aesthetic preferences of humans evolved in order to enhance survival and reproductive success. According to this theory, factors like color preference, preferred mate body ratios, shapes, emotional ties with objects, and many other aspects of the aesthetic experience can be explained through the theory of evolution.

For example, humans' aesthetic preference for certain landscapes can be seen as a concept developed through evolution which helped humans select a good habitat to live in. In the same way, the beauty of the human body is connected with human reproduction. Even art forms like music can be considered products of evolution; the field of evolutionary musicology studies the relationship between music perception and production and evolutionary theory.

So, according to this theory, the reason we see beauty in certain things can be a concept hardwired into our brains by evolution. This theory can also explain the commonalities in the objects that each person sees beauty in.

Beauty according to AI

If we consider beauty as an experience that intelligent and sentient beings perceive, then a strong AI which is sentient and intelligent like humans may be able to perceive it too. But this may not hold if perceiving beauty is a quality only of human intelligence and AIs have another type or mechanism of intelligence. Even so, the type of intelligence (or sentience, or consciousness) that an AI possesses may be able to create its own concept of beauty (or something similar to it) according to the mechanisms it has.

For humans, beauty can have a strong relationship with emotions. This may also apply to AI, but the requirement here is that the AI must be capable of producing emotions, and that will depend on the architecture and the evolutionary process of the AI's mind. So what would the emotions of an AI be? If the AI has an architecture similar to a human's, then like humans the AI will also have a set of desires, and these desires can generate more complex emotions. So this becomes a question about the AI's desires. For humans, desires are about survival or attending to basic needs (e.g., food), so an AI's desires will be about its needs. When it has these desires, more complex emotions will be constructed around them, depending on its mental architecture. Whether humans can understand these emotions is another question. Even if we can understand an AI's desires, it will be harder to understand its emotions, since they are more complex than basic desires, and that makes it harder still to understand its concept of beauty. The things it sees as beautiful may not be beautiful to us, and the things we see as beautiful may not be beautiful to it.

As I explained earlier in the Evolutionary Aesthetics section, beauty can also be a concept we developed through evolution. The same principle can be applied to AI: an AI's perception of beauty can be a concept hardwired into its brain, developed through its own evolutionary process. If so, in order to understand the AI's concept of beauty we should look into that evolutionary process. If the process is complex and fast, then the AI's concept of beauty can be more complex than ours. Furthermore, if the AI has the ability to self-modify or to control its own evolution, that will also make its concept of beauty more complex. Its perception of beauty, and its art forms, could be more complex and even based on different principles than ours, due to this difference in evolution and its increased intelligence.

Beauty can also be an idea planted or hardwired into the AI's brain by humans. If so, the AI will see beauty in the same things that humans see beauty in (or it will see beauty in whatever we want it to see beauty in). Current attempts in AI research to give AI the ability to create art are, in a way, doing the same thing. But I think the AI will alter these ideas and form its own ideas about beauty. It won't necessarily change or remove the planted ideas, but those ideas can evolve along with the evolution of the AI's mind. Also, the AI doesn't always have the same conditions as humans for seeing beauty in a certain thing. For example, we can see the beauty of a woman, and an AI can have a planted idea of the beauty of that woman, but we cannot expect the AI to feel the same (perhaps biological) attraction towards her that we humans do.

So the concept of beauty in AI can be somewhat different from the human concept of beauty. But I think AIs can evolve into perceiving beauty in their own way. And I think that is truly beautiful.

Happy birthday Nikola Tesla!

Machines and us.


Collective behavior is a common type of behavior among humans as well as animals. Dictionary.com defines it as,

“The spontaneous, unstructured, and temporary behavior of a group of people in response to the same event,  situation,  etc.”

But for this article, we only consider the behavior of a set of machines or AI agents which behave individually according to their inputs and their internal factors and rules. These inputs can come from the environment of which they are a part, as well as from their fellow agents. Both structured and unstructured behaviors are considered here.

Collective behavior in machines isn't a new idea. Swarm intelligence, introduced by Gerardo Beni and Jing Wang in 1989, studies the collective behavior of decentralized, self-organized, natural or artificial systems, and the concept is mostly used in the field of artificial intelligence.
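As a rough illustration of the decentralized, self-organized behavior that swarm intelligence studies, here is a small sketch in which each agent follows a single local rule (move toward the average position of nearby agents) and clustering emerges without any central controller. The rule and its parameters are illustrative assumptions, not any particular published swarm algorithm.

```python
# Each agent sees only its neighbours within a radius and nudges itself
# toward their average position; grouping emerges from this local rule alone.
import random

def step(positions, radius=2.0, rate=0.1):
    new_positions = []
    for p in positions:
        neighbours = [q for q in positions if abs(q - p) < radius]
        local_mean = sum(neighbours) / len(neighbours)
        new_positions.append(p + rate * (local_mean - p))   # local cohesion rule
    return new_positions

if __name__ == "__main__":
    agents = [random.uniform(0, 20) for _ in range(30)]
    for _ in range(200):
        agents = step(agents)
    print(sorted(round(a, 1) for a in agents))   # agents end up in tight clusters
```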

These kinds of studies are mostly focused on intelligence. When it comes to consciousness, things are not that clear. The purpose of this article is to see whether this structured or unstructured collective behavior of AI agents can produce consciousness.

Collective behavior of AI agents and consciousness.

First of all, let's look at the human body. If we consider a cell in the human body as a small micro-machine, we can say that the human body is a set of machines working together. And if consciousness is an emergent product of the brain, then we might be able to say that consciousness is a product of the collective behavior of neurons. So can we apply the same theory to AI? Can we make a set of programs, interconnected with each other, that can ultimately create consciousness? In a sense we have already done this: we have artificial neural networks. But biological neurons are much more complex than artificial ones, so the artificial neurons would have to become more similar to real neurons.

But maybe it is only the functions, or the inputs and outputs, of neurons that we have to mimic (not their inner workings). We don't need to make artificial neurons that are identical to real neurons in every way. The China brain (also known as the Chinese Nation or Chinese Gym) is a thought experiment in the philosophy of mind about this kind of brain simulation through structured collective behavior. Proposed by Ned Block, it describes a situation where each citizen of China is asked to simulate the actions of one neuron in the brain, using telephones or walkie-talkies to simulate the axons and dendrites that connect neurons. Although Block argues that the China brain would not have a mind, some philosophers, like Daniel Dennett, have concluded that the China brain does create a mental state. If the China brain can create consciousness, then replacing the people with machines which can simulate the functions of neurons and communicate with each other may also produce consciousness.
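As a toy illustration of this "simulate only the inputs and outputs" idea, the sketch below wires together a few agents, each of which only mimics a single neuron's input/output behavior and passes a message onward; together they compute XOR, something no single such unit can do. The weights and the XOR task are illustrative choices of mine, not anything taken from Block's thought experiment.

```python
# Each "person" is an object that only sums incoming messages and sends 1 or 0
# onward; wired together, the group computes a function none of them computes alone.

class NeuronAgent:
    def __init__(self, weights, bias):
        self.weights, self.bias = weights, bias

    def receive(self, inputs):
        # The agent mimics a neuron's input/output behaviour: weighted sum, threshold.
        total = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return 1 if total > 0 else 0

hidden = [NeuronAgent([1, 1], -0.5),   # fires like OR
          NeuronAgent([1, 1], -1.5)]   # fires like AND
output = NeuronAgent([1, -1], -0.5)    # OR AND NOT-AND = XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    messages = [agent.receive([a, b]) for agent in hidden]
    print(a, b, "->", output.receive(messages))
```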

There is also a concept of collective consciousness in sociology. Collective consciousness is a term introduced by the French sociologist Émile Durkheim (1890) to identify a shared framework, or set of shared beliefs, ideas, and moral attitudes, which operates as a unifying force within society. If we consider a group of people as a single being, we can actually see some of the qualities of consciousness within it. For example, a certain group or community has an awareness of what is happening within it and also a degree of control over it, and self-awareness and control are essential parts of consciousness. A group can even conceptualize a self, which creates a boundary between that group and other similar groups (for example, two nations). So, according to this view, even unstructured or semi-structured collective behavior might be able to create consciousness.

But when applying this theory to machines, we run into a bit of a problem: unlike a collection of machines, human society is composed of conscious beings. So the question becomes: can consciousness emerge from the collective behavior of a group of agents who don't have consciousness? In a society or a group, the actions of an individual person amount to consuming information, processing it (adding new information, manipulating it, making decisions based on it, creating new information, or storing it in memory), and passing it on to others in some form of communication. None of this information processing requires consciousness; a philosophical zombie, which acts in the same way as a human, could do the same things. So I don't think it is necessary to have conscious beings to create a collective consciousness in this way. But these beings (or AI agents) must have the necessary intelligence to be able to behave collectively like humans.

So we can see that consciousness through collective behavior might be a possibility. But the individual AI agents would still need to possess a considerable level of intelligence in order for consciousness to emerge from the collective.