scribbles on AI

Some random ideas and stuff on AI, ALife, Cognitive Science, AGI, Philosophy, etc


Creativity can be defined as the process of creating something original and valuable. It can be the creation of an idea, a story, a joke, a painting, a piece of music, or even a strategy or a solution. This kind of creative task mostly involves a chain or a tree of decisions. For example, if we are telling a story, we have to make decisions like what the places will be, what the time period will be, who the characters will be and what incidents will happen in the story.

When we talk about AI, we can see that AI has been used for some time in tasks that require creativity, such as creating strategies in games (ex: Chess). There have also been attempts to teach computers to write stories and create paintings and music. Although machines can perform some of these tasks better than humans (ex: playing board games), there are also tasks that machines are not very good at. For example, making a joke or telling a story is not something that AI does very well compared to humans.

In most AI approaches that try to engage in a creative process, finding a solution to a problem requires choosing an optimal solution, or a set or chain of solutions, from a pool of candidate solutions. This solution pool can be a structure like a tree (ex: a game tree) or a list. The selection process may include an objective function which measures the strength of a given solution. It can also include learning, which either modifies the selection process, modifies the existing solutions in the solution pool, or adds new solutions to the pool. This approach works well when the number of solutions in the pool is low and the efficiency of the selection mechanism is high. But when the number of solutions in the pool grows exponentially and/or when we are unable to define a better selection mechanism, this approach doesn't work well.
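To make the idea concrete, here is a toy sketch of that selection approach in Python. The move set and the objective function are made up purely for illustration; the point is only the shape of the method: enumerate candidate chains of decisions, score each with an objective function, keep the best.

```python
import itertools

MOVES = ["a", "b", "c"]  # hypothetical decision options at each step

def objective(chain):
    """Toy objective: reward chains that alternate between different moves."""
    return sum(1 for x, y in zip(chain, chain[1:]) if x != y)

def best_chain(length):
    """Score every chain of the given length and keep the best.

    The pool has len(MOVES) ** length candidates -- the exponential
    blow-up described above is visible right here.
    """
    pool = itertools.product(MOVES, repeat=length)
    return max(pool, key=objective)

chain = best_chain(4)
print(chain, objective(chain))
```

With only 3 moves and chains of length 4 the pool already holds 81 candidates; for a story with hundreds of possible events per step, exhaustive scoring like this becomes hopeless, which is exactly the limitation the paragraph above describes.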

How the human brain engages in such a creative process is a bit of a question. It seems that the brain makes decisions using its highly parallelized processing capabilities. At the very least, the brain needs to do parallel processing to form a unified perception, since it has to process information from multiple sensory organs simultaneously. It is not clear whether humans make decisions by unconsciously selecting from all available solutions, or by focusing on an individual solution (which may be selected from the limited number of solutions the brain can generate). But somehow the brain manages to do reasonably well at some tasks which computers are not good at. And those are mainly tasks that have an exponential number of possible solutions.

As I mentioned earlier, tasks like writing stories, poems and jokes are things that humans do better than machines. This may be due to the fact that the solution pool for this kind of task is considerably large. For example, if we want to describe an imaginary incident, we create a mental representation of an actual or imaginary world that acts according to some set of rules, and we create a chain of events in that world. When we direct this event chain, we choose events according to the feelings we want our audience to feel and the end goal we want from the task. But how we choose one event over another is the most important problem we need to solve. There can be a large number of events to choose from at each step of the incident, and that makes the solution pool exponentially large. Also, what criteria should be used to select one event rather than another is something that is hard to teach a computer.

Other than that, in this kind of task, evaluating the intermediate steps and the end solution (the story, the poem or the joke) is hard without making the machine understand the meanings of the story or the words. For humans, these meanings are formed through sensory experiences, and it is not easy to fully explain them through language. So it is not easy to create a machine that interprets these meanings (and these sensory experiences are not relevant to machines either, since their sensory experiences are different from ours; they may have more, fewer or even different sensors). Also, we don't exactly know what kind of structure we need to hold these meanings.

But the meaning of a word for a machine doesn't have to be the same as ours. The meaning of a word can be the collection of other words that it relates to and the types of relations it has with those words. And a technique like the semantic network can be used to hold these meanings (I talked about something like this in an earlier article). But achieving higher complexity in these kinds of structures, and defining relations between these structures and human emotion levels, can still be a bit of a problem.
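As a rough sketch of that idea (the words and relation types here are just illustrative assumptions, not any real dataset), a semantic network can be held as a simple graph of typed relations, where the "meaning" of a word is nothing more than everything it connects to and how:

```python
from collections import defaultdict

class SemanticNet:
    """A tiny semantic network: each word maps to a set of typed relations."""

    def __init__(self):
        self.relations = defaultdict(set)  # word -> {(relation_type, other_word)}

    def add(self, word, relation, other):
        self.relations[word].add((relation, other))

    def meaning(self, word):
        """The 'meaning' of a word: the words it relates to, and how."""
        return self.relations[word]

net = SemanticNet()
net.add("bird", "is_a", "animal")
net.add("bird", "can", "fly")
net.add("penguin", "is_a", "bird")

print(net.meaning("bird"))
```

The hard part the paragraph above points at is not this structure itself, but filling it at scale and linking such nodes to something like human emotion levels.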

But one can argue that meaning is not always important in creating something such as a story. A chatbot can answer questions reasonably well without knowing the meanings of its answers; it uses a dataset of text to learn the most probable answer to a given question. And techniques like Markov chains can be used to create text using only the probabilities of each word appearing after a given word in a huge corpus of text. But the applicability of this method to creating long stories or jokes is problematic. Creating a simple sentence or two is not hard, since the solution pool will be small. But a larger story involves a large number of choices, and every choice can affect the future choices. Also, the overall story must be directed according to a certain theme, which makes the task harder. All of this would be much easier if there were a better model for meanings.
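Here is a minimal sketch of that Markov-chain technique. The tiny "corpus" is a stand-in for the huge body of text a real system would need; everything else is the actual method: count which word follows which, then walk the chain by sampling successors.

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count successors: next_words[w] lists every word seen right after w,
# so repeated followers are naturally more likely to be sampled.
next_words = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    next_words[current].append(nxt)

def generate(start, length, seed=0):
    """Walk the chain, picking each next word in proportion to how often
    it followed the current word in the corpus."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        choices = next_words.get(words[-1])
        if not choices:  # dead end: this word never had a successor
            break
        words.append(rng.choice(choices))
    return " ".join(words)

print(generate("the", 8))
```

The output is locally plausible but has no theme or direction, which is exactly why this method struggles with long stories: each choice is made from one word of context, while a story needs every choice to serve an overall goal.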

So creativity is something that humans are still much better at. We cannot say it will stay that way in the future, but we cannot really know what's going to happen, since predicting the future is not something even humans are good at.

Synthetic intelligence is another term used for artificial intelligence, with the implication that the intelligence of a machine doesn't need to be an imitation of human intelligence; it can be a genuine form of intelligence of its own. It means that the machine would generate intelligence in its own way instead of duplicating how humans make decisions.

In this article a somewhat similar concept is discussed regarding consciousness. Here the term synthetic consciousness is used in the sense that a machine can become conscious through mechanisms different from those of the human brain. In artificial consciousness, by contrast, the machine is created by directly simulating or duplicating the human brain.

The synthetic consciousness concept asks the question: do we need to replicate the architecture of the brain to create a conscious machine? It's kind of like asking whether we need to make an airplane flap its wings like a bird in order to fly. An airplane can fly without flapping wings. But the problem is that, though they both fly, they differ in some functions. For example, a bird can take off without a runway, or land on a tree. Likewise, we may be able to replicate some of the functions needed for something to become conscious without using the original mechanisms, but that may not be sufficient. Also, we cannot be sure that we know all the functions. Since most of our definitions of consciousness are incomplete, it will be somewhat hard to recreate them all. And our level of understanding of some of the functions that we know to be part of consciousness may not be enough either.

But this doesn't necessarily mean that this approach can't work. It's a matter of finding out all the functions of consciousness. The problem is that different theories tend to put different things inside the domain of consciousness, and what's meant by each function (ex: subjectivity, awareness) may also differ between theories. But in a way, synthetic consciousness can be used as a tool to test those theories: build machines according to different theories and see if they are conscious. There are two problems with this, though. One is that it is not easy to create these functions. The other is that we cannot really know whether someone, or a machine, is conscious.

An important problem in the philosophy of consciousness is the hard problem of consciousness, proposed by David Chalmers. It divides the problems of consciousness into two categories, hard problems and easy problems. The hard problems are questions like,

  • "How is it that some organisms are subjects of experience?"
  • "Why does awareness of sensory information exist at all?"
  • "Why do qualia exist?"
  • "Why is there a subjective component to experience?"
  • "Why aren’t we philosophical zombies?"

And the easy problems are,

  • The ability to discriminate, categorize, and react to environmental stimuli
  • The integration of information by a cognitive system
  • The reportability of mental states
  • The ability of a system to access its own internal states
  • The focus of attention
  • The deliberate control of behavior
  • The difference between wakefulness and sleep.

Though the easy problems are relatively easy to answer, the hard problems are not easy to even understand. One response to this theory, proposed by Daniel Dennett, is that there are no hard problems separate from the easy problems: if you solve the easy problems, the hard problems will be automatically solved. This can be tested using AI. If we can create a machine which has all the functions described in the easy problems of consciousness, then we can see whether the machine gains consciousness.

Another way is to take synthetic consciousness to a lower level. That means that instead of creating mechanisms to reproduce the functions of the whole brain, we make a network of machines that reproduce the functions of different brain areas. For example, we can create devices which perform all the functions of brain areas like the temporal lobe, occipital lobe, parietal lobe or frontal lobe. We can go deeper still by creating devices which have the same functions as neurons and connecting them. But as we go deeper and deeper, synthetic consciousness moves more and more towards artificial consciousness. Nevertheless, here too the limits of our knowledge of the functions of brain areas or neurons will be a problem.

There is a thought experiment about neuron functions proposed by Ned Block, called the Chinese Nation (also known as the Chinese Gym or China Brain). In this thought experiment, the neural structure of the brain is recreated by assigning each citizen of China to simulate the actions of one neuron, using telephones or walkie-talkies to simulate the axons and dendrites that connect neurons. Although Ned Block argued that this cannot create a mind, some philosophers, like Daniel Dennett, have argued that the China brain could create a mental state.

In artificial consciousness approaches, we create a machine which has an architecture similar to the brain and the nervous system. This approach doesn't require a full understanding of consciousness. Instead, we can use it to study and understand how consciousness emerges through the mechanisms of the brain. For this approach, we need to recreate the neural network structure of the brain (similar to the approach in the above paragraph, where we make devices that have the functions of neurons). But the problem is that our understanding of the brain is not complete either. We must understand the processing levels in the brain. What is the lowest processing unit in the brain we can replace with devices? Is it the neurons? Or is there molecular-level processing? Or even quantum-level processing, as proposed in Orchestrated Objective Reduction by Sir Roger Penrose and Stuart Hameroff? We cannot use this approach without answering these questions.

So there are two approaches to creating consciousness, and each raises an important question. How does consciousness emerge from the brain? And is it only the brain that can have consciousness, or can other mechanisms create consciousness too? Hopefully we will find answers to both of these questions in the future.

Consciousness is a very hard term to define. It is also hard to convince someone of your definition, since everyone has some sort of intuitive idea about it (Daniel Dennett talks about this in one of his TED talks: "it's very hard to change people's minds about something like consciousness and I finally figured out the reason for that… the reason for that is that everybody is an expert in consciousness…"). An extremely simple way of explaining it would be: consciousness is awareness and control over external objects and also over one's own mental content. Or it can be defined as sentience, having a selfhood, subjectivity, or the ability to experience or to feel.

One of the first questions about consciousness in philosophy is the mind-body problem. The first influential philosopher to talk about this was René Descartes (1596-1650). According to him, consciousness (or mind) is made of mental substance (res cogitans), which is one of the two substances the universe is made of (the other being physical substance, or res extensa). This is known as Cartesian Dualism. Later, theories which posit only one substance were also proposed (monisms). Three types of monism are physicalism (which holds that mind is composed of matter), idealism (which holds that matter, or the physical world, is an illusion and only mind exists) and neutral monism (which holds that both mind and matter are made of a distinct essence that is itself identical to neither of them).

One of the earliest philosophers who tried to define the actual term consciousness was John Locke (1690). He defined it as "the perception of what passes in a man's own mind". Another important view of consciousness was given by the philosopher and psychologist William James (1890). He proposed the idea of the stream of consciousness: that consciousness flows like a stream. He defined psychology as the description and explanation of states of consciousness, and he stated five characteristics of consciousness: personal subjectivity, constant change, continuity despite the change, intentionality and selective attention.

One interesting question is what it would be like not to be conscious (actually this question itself is meaningless, but for now let's just go with it). I like to use an example for this. Let's say we have some kind of device attached to our body. It can take inputs and/or give outputs (for example, someone can press a key on the device and it will type or display a character, like a typewriter). But we are unaware of this (we don't feel anything when someone presses the keys). So if we take the whole system, device and person, we can say that the typing process is an unconscious process of the system. Now imagine that all of our sensory organs are like that device (interconnected, doing some processing and producing output). Then what would it be like to have no external awareness? In this situation we still take sensory inputs, and even process them and produce output, but we aren't aware of it. Taking inputs doesn't necessarily mean we are aware of the process. Of course, in the above example we would still have awareness of our mental content, since we are humans. Not having even that would be like deep sleep (without dreams), or a coma (I think), or someone striking you on the head and rendering you unconscious. But as I said before, the question itself is not actually correct, since as Thomas Nagel (1974) said, the 'what it is like' part is itself consciousness (Nagel's definition is that an organism is conscious if there is something it is like to be that organism). In other words, to feel is to be conscious.

According to some philosophers, consciousness is an illusion: only a product of the information processing in the brain, or in other words a virtual self. That theory holds that our decision making and other mental processes are always unconscious and happen in parallel, but when we introspect, or ask ourselves what we are thinking now, a content emerges as an answer to the question, according to the processes currently going on in the brain. That means consciousness only exists when we look into our minds. This isn't an unreasonable argument. But I think that even if consciousness is an illusion, it is still useful (and easy) to have, since it gives us the notions of free will, emotions, beliefs, a sense of self, etc. Also, it is important that we can actually ask ourselves questions, or introspect.

According to the two-factor theory (Schachter-Singer theory), emotions are based on two factors: physiological arousal and a cognitive label, or the experience of emotion. So an emotion is the result of being aware of a physiological effect, the conscious experience of that effect. According to this, emotions are a result of consciousness. In the James–Lange theory, another theory of emotions, the conscious experience is secondary to the physical effect. But there is still a conscious experience.

Beliefs, like emotions, need consciousness. As in all conscious experience, a belief has a subject (the believer) and an object (the proposition believed). And as with emotions, we must be aware of the belief, although a belief is not a physical effect but a content of our own mind. So beliefs also need consciousness.

When we talk about AI (not strong AI) or autonomous agents, they are more like the device example, except they do not have the human in them (so they do not have internal awareness). So what do we need in order for consciousness to emerge from their information processing (if consciousness isn't a non-physical entity)? Or, what is the difference between their information processes, which aren't conscious, and ours, which are? Let's look at this from another angle. What is the difference between our unconscious information processes (for example, if we accidentally touch a burning object, we immediately pull our hand away without consciously deciding to; the thoughts about it come later) and our conscious processes (for example, I make a conscious decision to lift my hand and the hand goes up)? Since both our conscious and unconscious processes happen in the brain and the rest of the nervous system, the difference between them must lie in the architecture, speed and complexity of the parts of the brain and nervous system that contribute to each kind of process. Getting back to AI, I think weak AI (autonomous agents) is more similar to our unconscious processes (not exactly the same, but what happens is roughly the same). So, as in the brain, for an AI's information processing to be conscious it must have an appropriate architecture with the relevant complexity and speed. The speed-and-complexity reply to the Chinese Room argument by Paul and Patricia Churchland (the Churchlands' luminous room thought experiment) gives the same idea.

Another problem about consciousness, which comes up especially in AI, is: how do we know if someone is conscious? This is known as the problem of other minds (given that I can only observe the behavior of others, how can I know that others have minds?). Two answers have been given to this problem: type physicalism and philosophical behaviorism. Type physicalism suggests that a given brain state is responsible for a corresponding mental state, so if someone is in a given brain state then he is in the corresponding mental state. But one problem with this approach is that we cannot be certain the brain state causes the mental state (what if both the brain state and the mental state are caused by something else?). Another problem is: can a different type of brain state give the same mental state (especially in AI)? The other approach, philosophical behaviorism (or logical behaviorism), states that to have a certain mental state is just to behave in certain ways. So to know someone is in a mental state, one only has to observe his or her behavior (since the mental state is the behavior). But the problems with this approach are that someone can feign a mental state, and that it is unclear how behavior can capture the qualitative nature of an experience.

Something I saw when I went through the feedback on my earlier articles is that people are sort of uncomfortable talking about consciousness, I guess mostly because it is hard to define. People who work in or are interested in AI tend to believe that consciousness is an illusion and/or that AI doesn't need consciousness because AIs are just tools (mostly because that makes their lives easier, I guess, since you cannot build something you don't fully understand). And people who don't work in AI, but are interested in philosophy, just throw the Chinese Room at you and close the case when asked about consciousness in AI. But for me, consciousness is interesting precisely because it's hard to define. It's kind of a mystery (if nothing else, the fact that human evolution could create something like consciousness is amazing). And it is also sort of an unattainable goal (like the unattainable object of desire, or as Lacan called it, objet petit a). Maybe people will figure this out in the future, or maybe consciousness will turn out to be just a mirage. But I sure would like to enjoy the journey towards the answer.

Consciousness in AI is a topic argued over not only by computer and cognitive scientists, but also by philosophers. Philosophers like John Searle and Hubert Dreyfus have argued against the idea that a computer can gain consciousness. For example, arguments like the Chinese Room have been proposed against the idea of strong AI. But there are also philosophers, like Daniel Dennett and Douglas Hofstadter, who have argued that computers can gain consciousness.

Although there are debates about how to create a conscious machine, in this article I choose to look at the creation of machine consciousness in another way. Do we have to design the AI's architecture for consciousness from the beginning to make a conscious AI? Or will the AI be able to gain consciousness on its own? Will consciousness emerge from the AI's architecture once it has gained sufficient complexity through evolution or self-modification, without human interference?

Consciousness without Human Design

Although consciousness is an important quality, defining it clearly is a somewhat difficult task. But we can roughly define it with two main components: awareness (phenomenal awareness) and agency. Awareness is the ability to sense the external world and also to feel or sense the content of one's own mind. Agency is control over the external world and also control over oneself, or one's mental states: control over both the behavioral aspects (external organs, hands, feet, etc.) and the mental ones. For it to count as consciousness, we should also be aware of that control; we should know or feel that we have the control (or that we are doing it). Actions we are not aware of, like the beating of the heart, breathing, or things we do without thinking (for example, walking or driving without concentrating on it, or while thinking about something else), aren't taken as conscious actions. So, putting all of this together, we can define consciousness (or at least this is the definition I'm using in this article) as awareness and control over external objects, plus awareness of one's own mental content. Another way of putting it is having a sense of selfhood.

According to the above definition, the concept of self is also linked with consciousness. So, what is the self? The self can be defined as the representation of one's identity, or the subject of experience. In other words, the self is the part that receives the experiences, the part that has the awareness. The self is an integral part of human motivation, cognition, affect and social identity.

The concept of self may not be something we are born with. According to the psychoanalyst Sigmund Freud, the part of the mind which creates the self develops later, during the psychological development of the child. In the beginning a child has only the Id. The Id is a set of desires which cannot be controlled by the child and which only seeks pleasure (the Pleasure Principle). Later in the development process, part of the Id is transformed into the Ego, and this Ego creates the concept of self in the child. So the question becomes: can an AI develop to a stage where it, too, creates something like the Ego of the human mind? If the AI has a structure with the necessary similarities to a human mind, or an artificial brain similar to the human brain and nervous system, then it may be able to undergo a process which creates some sort of Ego similar to the human Ego. For humans, the Ego is created through the interactions a child has with the external world. In the same way, perhaps the influences an AI faces can trigger the creation of an Ego in the AI.

According to the theory of Jacques Lacan, the process of creating the self of a child happens in a stage called the mirror stage. In this stage the child (at 6-18 months of age) sees an external image of his or her body (through a mirror, or represented to the child through the mother or primary caregiver) and identifies it as a totality. In other words, the child realizes that he or she is not an extension of the world, but a separate entity from the rest of the world. The concept of self develops through this process. So, can an AI go through this kind of stage and develop a self? Regardless of whether the structure of the AI is similar to a human mind or not, realizing for the first time that it is a separate individual would be a new and revolutionary experience for an AI (if the AI is sophisticated enough to process that kind of realization or experience in a proper way). This kind of experience might produce a change in the AI that gives it an idea of self. But if this stage is to be similar to the mirror stage, the AI must also have a way of seeing its own reflection. If the AI has a body (a robot, maybe) and doesn't extend beyond that body, this won't be a problem. But if the AI can be copied onto new hardware, or can extend itself through a network or hardware, then defining its boundaries can be somewhat difficult, and seeing itself as something unfragmented, with clear boundaries, would be a bit tricky. But if the architecture of the AI allows a different way of defining boundaries and seeing itself as an individual, then this would work.

When we consider other animals, we can see that an animal must have a certain complexity to have self-awareness (or consciousness). Methods like the red spot technique (the mirror test) have shown that some animals, like certain species of ape and dolphins, possess self-awareness, while others do not. So we can assume that an AI must also have an architecture of sufficient complexity to develop consciousness. At some point in the process of evolution, the AI must achieve the necessary complexity in order to become conscious. But if the evolution of AI is similar to the evolutionary process in Darwinian theory, then the AIs which finally achieve consciousness won't be the ones the process began with, because each new generation of AIs is built by merging the best architectures of the old generation and mutating them. For this merging and mutating process, the AIs may need human assistance.
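The merge-and-mutate loop described above is essentially a genetic algorithm. Here is a toy sketch of it; the "architecture" is just a list of bits and the fitness function is a made-up target, both placeholder assumptions purely for illustration:

```python
import random

rng = random.Random(42)
TARGET = [1, 0, 1, 1, 0, 1]  # stand-in for "the architecture we want"

def fitness(arch):
    """Made-up fitness: how many positions match the target."""
    return sum(1 for a, t in zip(arch, TARGET) if a == t)

def merge(a, b):
    """Crossover: first half of one parent, second half of the other."""
    cut = len(a) // 2
    return a[:cut] + b[cut:]

def mutate(arch, rate=0.1):
    """Flip each bit with a small probability."""
    return [1 - g if rng.random() < rate else g for g in arch]

population = [[rng.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(30):
    best = sorted(population, key=fitness, reverse=True)[:5]
    # Elitism: carry the single best architecture over unchanged, then
    # fill the generation with mutated merges of the top performers.
    population = [best[0]] + [mutate(merge(rng.choice(best), rng.choice(best)))
                              for _ in range(19)]

print(max(fitness(a) for a in population))
```

Note that the selection, merging and mutation here are all orchestrated from outside the population, which is the point the paragraph makes: in a Darwinian-style process, the individuals that finally succeed are not the ones the process began with, and the process itself needs an outside operator.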

But a single AI can also undergo a sort of evolutionary process of its own. Such a process would be self-improvement, or more precisely recursive self-improvement: the ability of an AI to reprogram its own software or add parts to its structure or architecture (maybe hardware-wise too). This process, too, could let the AI reach the necessary complexity at some point.

So maybe an AI will be able to produce consciousness through self-modification, or through a stage in its own psychological development, without humans specifically designing it to be conscious from the beginning.


Beauty is a well-known quality we perceive that needs no introduction. From the dawn of humanity, humans have found, enjoyed, described, analyzed and even created beauty. Humans see beauty in many things, from the softness and tenderness of a flower to the bright orange color of a sunset, and from the smiling face of a woman to a poem or a painting about her.

However, in this article I would like to talk about another aspect of beauty. Can beauty only be perceived by humans? Can another intelligent and conscious being (an AI) perceive beauty? Would such beings perceive beauty in the same things as humans? These are the questions this article tries to discuss.

Philosophy of Beauty

The nature of beauty is a widely discussed theme in Western philosophy; it was debated even in ancient Greek philosophy. In fact, it is one of the fundamental topics of philosophical aesthetics.

The most popular question about beauty is whether it is a subjective or an objective quality. Most early philosophers, like Plato and Aristotle, believed that beauty is an objective quality. But later, around the 18th century, philosophers like David Hume argued that beauty is a subjective experience. Yet though beauty can be considered a subjective quality, most of the things we find beautiful tend to be common, or to have common qualities. These commonalities do not necessarily prove that beauty is objective either.

Evolutionary Aesthetics

Evolutionary aesthetics suggests that the basic aesthetic preferences of humans evolved to enhance survival and reproductive success. According to this theory, factors like color preference, preferred mate body ratios, shapes, emotional ties with objects, and many other aspects of the aesthetic experience can be explained through the theory of evolution.

For example, humans' aesthetic preference for certain landscapes may be a disposition developed through evolution which helped humans select good habitats to live in. In the same way, the beauty of the human body is connected with human reproduction. Even art forms like music can be considered products of evolution; the field of evolutionary musicology studies the relationship between music perception and production and evolutionary theory.

So, according to this theory, the reason we see beauty in certain things can be a disposition hardwired into our brains by evolution. And this theory can explain the commonalities in the objects each person sees beauty in.

Beauty according to AI

If we consider beauty as an experience that intelligent and sentient beings perceive, then a strong AI which is sentient and intelligent like humans may be able to perceive it too. But this may not hold if perceiving beauty is a quality only of human intelligence, and AI has another type of intelligence or another mechanism for producing it. Even so, the type of intelligence (or sentience, or consciousness) that an AI possesses may let it create its own concept of beauty (or something similar) according to the mechanisms it has.

For humans, beauty can have a strong relationship with emotions, and this may also apply to AI. But the requirement here is that the AI must be capable of producing emotions, and that will depend on the architecture and the evolutionary process of the AI's mind. So what would the emotions of an AI be? If the AI has an architecture similar to ours, then like humans the AI will have a set of desires, and these desires can generate more complex emotions. So this becomes a question about the AI's desires. For humans, desires are about survival, or attending to basic needs (ex: food). So an AI's desires will be about its needs. From these desires, more complex emotions will be constructed, depending on its mental architecture. But whether humans can understand these emotions is a bit of a question. Even if we can understand the desires of an AI, it will be a little hard to understand its emotions, since they are more complex than basic desires. And that makes it harder to understand its concept of beauty. The things it sees as beautiful may not be beautiful to us, and the things we see as beautiful may not be beautiful to it.

As I explained in the Evolutionary Aesthetics section, beauty can also be a concept we developed through evolution, and the same principle can be applied to AI. An AI's perception of beauty could be a concept hardwired into its brain, developed through its own evolutionary process. If so, to understand the AI's concept of beauty we should look into the AI's evolutionary process. If that process is complex and fast, the AI's concept of beauty could become more complex than ours. Moreover, if the AI has the ability of self-modification, or control over its own evolution, that too will make its concept of beauty more complex. Its perception of beauty, and its art forms, would be more complex and might even be based on different principles than ours, due to this difference in evolution and its increased intelligence.

Beauty can also be an idea planted or hardwired into the AI's brain by humans. If so, the AI will see beauty in the same things humans see beauty in (or it will see beauty in what we want it to see beauty in). Current attempts in AI research to give AI the ability to create art are kind of doing the same thing. But I think the AI would alter these ideas and form its own ideas about beauty. It wouldn't necessarily change or remove the planted ideas, but the ideas could evolve along with the evolution of the AI's mind. Also, the AI doesn't always have the same conditions as humans for seeing beauty in a certain thing. For example, we can see the beauty of a woman, and an AI can have a planted idea of the beauty of that woman. But we cannot expect the AI to have the same (maybe biological) attraction towards that woman that we humans have.

So the concept of beauty in an AI can be a bit different from the human concept of beauty. But I think AIs can evolve to perceive beauty in their own way. And I think that is truly beautiful.