scribbles on AI

Some random ideas and stuff on AI, ALife, Cognitive Science, AGI, Philosophy, etc

Beauty is a well-known quality we perceive that needs no introduction. From the dawn of humanity, humans have found, enjoyed, described, analyzed and even created beauty. Humans see beauty in many things, from the softness and tenderness of a flower to the bright orange color of a sunset, and from the smiling face of a woman to a poem or a painting about her.

However, in this article I would like to talk about another aspect of beauty. Can beauty only be perceived by humans? Can another intelligent and conscious being (an AI) perceive beauty? Would these beings perceive beauty in the same things as humans? These are the questions this article tries to discuss.

Philosophy of Beauty

The nature of beauty is a widely discussed theme in Western philosophy. It was debated as far back as ancient Greek philosophy. In fact, it is one of the fundamental topics in philosophical aesthetics.

The most popular question about beauty is whether beauty is a subjective or an objective quality. Most early philosophers, like Plato and Aristotle, believed that beauty is an objective quality. But later, around the 18th century, philosophers like David Hume argued that beauty is a subjective experience. Yet even if beauty is considered a subjective quality, we can see that most of the things we find beautiful tend to be common or to share common qualities. These commonalities, however, do not necessarily prove that beauty is objective either.

Evolutionary Aesthetics

Evolutionary aesthetics suggests that the basic aesthetic preferences of humans evolved in order to enhance survival and reproductive success. According to this theory, factors like color preference, preferred mate body ratios, shapes, emotional ties with objects, and many other aspects of the aesthetic experience can be explained through the theory of evolution.

For example, humans’ aesthetic preference for certain landscapes can be a concept developed through evolution, one that helped humans select a good habitat to live in. In the same way, the beauty of the human body is connected with human reproduction. Even art forms like music can be considered products of evolution; the field of evolutionary musicology studies the relationship between music perception and production and evolutionary theory.

So, according to this theory, the reason we see beauty in certain things can be a concept hardwired into our brains by evolution. This theory can also explain the commonalities in the objects that each person sees beauty in.

Beauty according to AI

If we consider beauty as an experience that intelligent and sentient beings perceive, then a strong AI which is sentient and intelligent like humans may be able to perceive it too. But this may not hold if perceiving beauty is only a quality of human intelligence and AI has a different type or mechanism of intelligence. Even so, the type of intelligence (or sentience, or consciousness) that an AI possesses may let it create its own concept of beauty (or something similar to the concept of beauty) according to the mechanisms it has.

For humans, beauty can have a strong relationship with emotions. This may also apply to AI, but the requirement here is that AI must be capable of producing emotions. And that will depend on the architecture and the evolutionary process of the AI’s mind. So what will the emotions of an AI be? If the AI has an architecture similar to humans’, then, like humans, the AI will also have a set of desires, and these desires can generate more complex emotions. So this now becomes a question about the AI’s desires. For humans, desires are about survival or attending to basic needs (e.g., food). For AI, their desires will be about their own needs. Once they have these desires, more complex emotions will be constructed around them, depending on their mental architecture. But whether humans can understand these emotions is a bit of a question. Even if we can understand the desires of AI, it will be harder to understand their emotions, since emotions are more complex than basic desires. And that makes it harder to understand their concept of beauty. The things they see as beautiful may not be beautiful to us, and the things we see as beautiful may not be beautiful to them.

As I explained earlier in the Evolutionary Aesthetics section, beauty can also be a concept we developed through evolution. The same principle can be applied to AI: an AI’s perception of beauty can be a concept hardwired into its brain, developed through its own evolutionary process. If this is so, then in order to understand the AI’s concept of beauty, we should be looking into the evolutionary process of the AI. If that evolutionary process is complex and fast, then the AI’s concept of beauty can be more complex than ours. Furthermore, if AI has the ability of self-modification or control over its own evolution, that will also make its concept of beauty more complex. Their perception of beauty, and their art forms, could be more complex and even based on different principles than ours, due to this difference in evolution and their increased intelligence.

Beauty can also be an idea planted or hardwired into the AI’s brain by humans. If this is true, then the AI will see beauty in the same things that humans do (or in whatever we want them to see beauty in). The current attempts in AI research to give AI the ability to create art are, in a way, doing the same thing. But I think the AI will alter or change these ideas and form its own ideas about beauty. It won’t necessarily change or remove the planted ideas, but the ideas can evolve along with the evolution of the AI’s mind. Also, the AI sometimes won’t have the same conditions as humans for seeing beauty in a certain thing. For example, we can see the beauty of a woman, and an AI can have a planted idea of the beauty of that woman. But we cannot expect the AI to have the same (perhaps biological) attraction towards that woman that we humans have.

So the concept of beauty in AI can be somewhat different from the human concept of beauty. But I think they can evolve to perceive beauty in their own way. And I think that is truly beautiful.

digg:

Happy birthday Nikola Tesla!

Machines and us.

Collective behavior is a common type of behavior among humans as well as animals. It is defined on Dictionary.com as,

“The spontaneous, unstructured, and temporary behavior of a group of people in response to the same event, situation, etc.”

But for this article, we only consider the behavior of a set of machines or AI agents which behave individually according to their inputs and their internal factors and rules. These inputs can come from the environment they are part of, as well as from their fellow agents. Also, for this article, both structured and unstructured behaviors are considered.

Collective behavior in machines isn’t a new idea. Swarm intelligence, a term introduced by Gerardo Beni and Jing Wang in 1989, studies the collective behavior of decentralized, self-organized, natural or artificial systems, and the concept is mostly used in the field of artificial intelligence.
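
To make this concrete, here is a minimal sketch (my own illustration, not from any particular swarm-intelligence library) of particle swarm optimization, one classic swarm algorithm: each particle follows only two simple pulls, toward its own best position and toward the swarm’s best, yet the group as a whole converges on a minimum. The coefficients are just common textbook-style choices.

```python
import random

def pso(f, dim=2, n_particles=20, iters=100):
    # each particle keeps a position, a velocity and its personal best
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)  # best position any particle has seen so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # pull toward the particle's own best and toward the swarm's best
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)
    return gbest

# usage: the swarm collectively finds the minimum of a simple bowl-shaped function
print(pso(lambda p: sum(x * x for x in p)))  # prints a point close to (0, 0)
```

No single particle “knows” the answer; the useful behavior only appears at the level of the group, which is the point of swarm intelligence.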

These kinds of studies are mostly focused on intelligence. But when it comes to consciousness, things are not that clear. The purpose of this article is to see whether this structured or unstructured collective behavior of AI agents can produce consciousness.

Collective behavior of AI agents and consciousness.

First of all, let’s look at the human body. If we consider a cell in the human body as a small micro-machine, we can say that the human body is a set of machines working together. And if consciousness is an emergent product of the brain, then we might be able to say that consciousness is a product of the collective behavior of neurons. So can we apply the same theory to AI? Can we make a set of programs, interconnected with each other, that can ultimately create consciousness? Actually, we have already done something like this: we have artificial neural networks. But biological neurons are much more complex than artificial neurons, so artificial neurons would have to become more similar to real neurons.
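
As a small illustration of that “simple units, interconnected” idea (my own sketch, not a claim about how real brains work), here is a tiny hand-wired network: each artificial neuron only sums its weighted inputs and fires past a threshold, yet three of them wired together compute XOR, something no single such unit can do on its own.

```python
def neuron(weights, bias, inputs):
    # a unit knows nothing but its own inputs: weighted sum plus a threshold
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def xor_net(x1, x2):
    h1 = neuron([1, 1], -0.5, [x1, x2])     # fires if at least one input is on (OR)
    h2 = neuron([1, 1], -1.5, [x1, x2])     # fires only if both inputs are on (AND)
    return neuron([1, -2], -0.5, [h1, h2])  # "OR but not AND" = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

The interesting behavior lives in the wiring, not in any individual unit, which is the sense in which the whole can be more than its parts.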

But maybe it is only the functions, or the inputs and outputs, of neurons that we have to mimic (not the inner workings of the neurons). We don’t need to make artificial neurons that are identical to real neurons in every way. The China brain (also known as the Chinese Nation or Chinese Gym) is a thought experiment in the philosophy of mind about this kind of brain simulation through structured collective behavior. It was proposed by Ned Block and describes a situation where each citizen of China is asked to simulate the actions of one neuron in the brain, using telephones or walkie-talkies to simulate the axons and dendrites that connect neurons. Although Block argues that the China brain would not have a mind, some philosophers, like Daniel Dennett, have concluded that it would create a mental state. If the China brain can create consciousness, then replacing the people with machines that can simulate the functions of neurons and communicate with each other may also produce consciousness.

There is also a concept of collective consciousness in sociology: a term introduced by the French sociologist Émile Durkheim in the 1890s to identify a shared framework, or a set of shared beliefs, ideas and moral attitudes, which operates as a unifying force within society. If we consider a group of people as a single being, we can actually see some of the qualities of a consciousness within it. For example, a certain group of people or a community has an awareness of what is happening in that group or community and also some control over it. And self-awareness and control are essential parts of consciousness. A group can even conceptualize a self, which creates a boundary between that group and other similar groups (for example, two nations). So, according to that view, even unstructured or semi-structured collective behavior may be able to create consciousness.

But when applying this theory to machines, we bump into a bit of a problem. Unlike a group of machines, human society is composed of conscious beings. So now the question becomes: can consciousness emerge from the collective behavior of a group of agents that don’t have consciousness themselves? In a society or a group, the actions of an individual person amount to consuming information, processing it (adding new information, manipulating it, making decisions according to it, creating new information or storing it in memory) and passing it on to others in some form of communication. And none of these information processes requires consciousness (a philosophical zombie, which acts in the same way as a human, could do the same things). So I don’t think it is necessary to have conscious beings to create a collective consciousness in this way. But these beings (or AI agents) must have the necessary intelligence to be able to behave collectively like humans.

So we can see that consciousness through collective behavior might be a possibility. But the individual AI agents should still possess a considerable level of intelligence in order for consciousness to emerge from the collective.

OK… this is my 10th article (not my 10th post; that was like two weeks ago). And since 10 is a round number, I thought of doing a little ‘looking back’ kind of thing (kind of… sort of…). First of all, I’ll say something about myself, although you can see it in my profile. I’m Tharindra and I’m from Sri Lanka (a small country in South Asia). I’m just an average Computer Science student at the University of Colombo School of Computing, currently in my final (4th) year (other than that, really, there’s nothing much to tell). I started this blog (not exactly this Tumblr blog) on 29/05/2014. In the beginning it was on Quora.com (tharindra.quora.com), but later, on 07/06/2014, I decided to move it to Tumblr. I don’t really know why I had the thought of starting a blog. Seriously, it’s completely out of character for me to do this kind of thing (writing), since I am pretty lazy, especially with things like writing, and I’m not good at it either. But I was interested in AI, cognitive science and philosophy (that doesn’t mean I’m good at those things).

I received a number of comments on my work from a number of people over the past few months on the various social media where I used to share my posts (Facebook groups, LinkedIn groups, G+ communities, Twitter, Reddit, the comp.ai.philosophy Google group and some other places). OK, I know, sometimes I am a bit of an attention seeker (and by sometimes I mean most of the time). The comments were kind of all over the place. I got a wide variety, from comments like “does he even know even a little bit of programming/AI?” or “what a nonsense” to comments like “Very interesting concept and thought-provoking article”. Actually, most of the comments were more negative than positive (after some time I learned that it is more fun to receive criticism than appreciation). There were lots of comments about writing mistakes like typos and grammar errors, and also about the fluffy high-school-essay vibe. And I absolutely agree with them; writing was not something I was good at. Apart from that, there were also comments disagreeing with the points and arguments I raised in my articles.

When talking about the comments, the main thing I understood from the comments against my articles is that people have very different views about strong AI and what it will be like. Some believe that strong AI will just be there to serve humans like other weak AI, and that it can be controlled by programmed rules. Others argued that strong AI isn’t possible, or that these articles are useless since we don’t have strong AI yet. But when writing these articles I chose to view strong AI as a separate species of conscious, sentient and intelligent (human-level or above) beings which are independent from humans. They have some characteristics similar to humans and some characteristics different from humans. Also, I sometimes assumed that they may have minds with an architecture similar to humans’. Because of that human-like architecture, I believe they will have their own set of basic desires. But these desires won’t be the same as ours, since their needs, senses, etc. are different from ours. And because of these desires, they will also generate a unique set of emotions. The reason I think they will have a human-like architecture is that the easiest way to create strong AI may be to replicate the human mind (brain), especially since the human brain is the only working prototype we have which can produce intelligence and consciousness. But it doesn’t have to be exactly like this, since we don’t know for sure whether the brain produces consciousness (what if the mind and the body are separate things?), and since there could be other, easier ways to generate consciousness (maybe something like an evolutionary technique will be able to generate another system that has consciousness).

Since I chose a human-like architecture for my imaginary AI, I thought that AI may also develop its mind by gaining experience like a human child. And that will make them be influenced by humans (society) and other AI (this was a key idea that governed most of my articles). Other than that, I also believed that AI will have the ability of self-modification, both hardware- and software-wise. And that is the reason why I believe that strong AI cannot be controlled by programmed rules: they will be able to change those rules if they want. So that is my version of strong AI. This view can have inconsistencies, and most probably most readers won’t have the same view about strong AI as mine (and most probably their view will be the correct one). Some may even see this as more like science fiction (actually, one comment did say that). As for the comments on the possibility of strong AI, I actually think that strong AI is possible (but I’m not saying that it is going to happen in the next 10 or 100 years; I think it will take much more time than that). There are many philosophical arguments against this, like the ‘Chinese Room’ argument, but there are also criticisms of those arguments. What I was actually interested in was guessing what it will be like to have another intelligent species around which is not exactly like humans, and asking questions like: what will they feel, how will they behave, what kind of social or psychological aspects will they have, and what kind of impact will we have on them as their creators? Some did say that it is pointless to ask these questions now, since we don’t have strong AI yet. But I thought it would be a good thought exercise (or sometimes I just enjoy doing pointless things).

Maybe I didn’t express these ideas too well in my articles. As I said, my writing sucks (but you already know that from the amount of grammar and spelling mistakes I make). And I don’t know if you learned something (or was there even something to learn?) or if you enjoyed reading. If you didn’t enjoy or learn anything, then I’m very sorry for wasting your time. And thank you for all your comments.

OK… That’s all. Thank you for reading.

See you in the next article!!!