Where Does Gender Bias in AI Come From?

The article presents content that does not necessarily reflect the views of HTEC Group.

Gender bias in AI ⊆ bias in AI ⊆ source material bias ⊆ human bias ⊆ natural bias ⊆ universal bias

Like any intelligent agent acting in the world, the author of this article is subject to selection pressures that push him towards specific topics and narrative styles – it is the month of March, when it is customary to celebrate topics historically tied to the women’s rights movement. Yet, it seems that the meaning and significance of this holiday have slowly morphed over time: from what was initially a movement for women’s suffrage, equality of opportunity, and reproductive rights to a kind of social simulacrum – a remnant of a past struggle now used merely as a means of advertisement.

Edward Bernays opens his most famous work, Propaganda [1], by stating that the conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society. In other words, modern society is predicated upon the assumption that there are forces acting on the masses to generate bias. This induced social bias is what enables companies, governments, and other influential agents to position their products and ideas as important or relevant.

Hence, we come to an important emergent question: how does one write an article advocating against bias when the very purpose of the article is, at least implicitly, socially mandated to induce bias? We are thus faced with a kind of paradox: we demand no bias while intuitively both expecting and exerting it.

Instead of belittling the reader by presenting them with the common thought on the subject, I will endeavor to bring up questions that might not have arisen from a typical brief overview of the literature, in the hope that, in doing so, this text will not be perceived as yet another voice of corporate fiat.

To begin the discussion, we must first investigate whether, and in which form, current large language models present their biases. To constrain the discussion, we will focus on ChatGPT as a case study, but the same arguments, as the reader will hopefully soon infer, would apply across different large language models and, to some extent, to people as well.

Are current large language models gender-biased?

In the case of ChatGPT, researchers have conducted a number of independent studies and experiments (some peer-reviewed, some not), identifying several different forms of bias, including political, racial, ethnic, and gender biases. These biases manifest in different contexts, in some cases prejudiced against one group, in others against another. However, what is commonly ignored is that there exists a kind of meta-bias on our part by which we assign more significance to one type of bias than to another. Setting aside, for the sake of argument, the observation that if we were entirely unbiased on all levels of analysis we would be indifferent to the topic of bias altogether – we would assign no greater value to, say, gender bias than to painting-style bias – we can proceed to analyze the biases these models express, pertaining to the topics the social meta-bias inclines us to select as more important.

There are numerous accounts of political bias in these systems, especially relevant to OpenAI’s content moderation systems. Experiments in these studies indicate a strong political preference and a tendency to mark negative comments about specific groups as more harmful than negative comments about others [2]. Most pertinently, negative comments about women are considered significantly more inappropriate than negative comments about men. On formal political spectrum tests, ChatGPT consistently scores high on political progressiveness and moderately high on political libertarianism – on both axes, inordinately far from the moderate center [3] [4] [5] [6].

Based on commonly administered tests of political proclivity, this large language model displays a significant leaning towards social-democratic and libertarian views [6]. Others have conducted thought experiments with the model that clearly indicate that OpenAI has invested significant effort into constraining the AI from producing racial or misogynistic slurs [7], to the point where it would categorically refuse any consideration of their use, even in a hypothetical scenario where merely uttering one might save millions of lives [8], yet its outputs sometimes indicate either overt [9] or subtle [10] [11] racial or gender bias. Therefore, based on both quantitative and qualitative analysis, one must conclude that the model is highly biased, often in contrasting ways.

The model, as a result of over-compensation against misogyny, might be displaying an implicit, latent form of misandry, all the while being vulnerable to carefully designed prompts that can steer it towards generating outputs with themes including self-harm, violence, and discrimination. In other words, the most frequent use cases will result in a libertarian bias in the output, but it remains possible, through careful prompt design, to manipulate the model into generating any kind of content.

As human beings, we are extremely prone to confirmation bias [12], and so, even with the most astute application of the scientific method, we might project our own beliefs and expectations and impose them over objective observation. In that sense, qualitative analysis of these networks, especially when considering biases which we have been conditioned to consider more important than others, will likely yield exactly those results that we expect. Quantitative analysis, on the other hand, seems to indicate an overall libertarian bias, despite specific outlier instances which suggest otherwise.

However, upon more careful reflection, we might note that the entire analysis seems grotesquely shallow and that we could arrive at a synthesis of a higher principle that resolves this apparent conflict. To avoid that mundane superficiality, it is worth asking what a perfectly unbiased model would look like: would it be a language model or a random word generator? In simpler terms, before we train the model on training data, it is, in a looser sense of the word, both perfectly unbiased and perfectly useless. It is only through training that the useful numerical biases emerge – they allow the model to operate and have predictive power. Yet at some point during training, these numerical biases cross a threshold and become social ones, and we face a strange kind of sorites paradox [13], which, ironically, we might be obliged to resolve linguistically.

Put plainly, contemporary AI models patently demonstrate bias, albeit in a way that misaligns with the dominant social expectation – they seem to express the very bias our collective expectation is predicated on.

Where does the bias come from?

In a Piagetian sense [14], as we grow and mature, we build a model of the world by iteratively expanding our existing schemata – we are constantly attempting to predict the environment and act in it in a way that results in anticipated outcomes. These mental models do not necessarily have to reflect reality [15], but rather allow us to interpret it and act in it in a consistent and expected way. In that sense, our models of the world are a result of both evolutionary and developmental selection pressures. In other words, our genetic history, as well as our environment, influences the development of our internal models of the world; those models are not an accurate reflection of the world, but rather an effective set of representations that allow us to predict it and act in it to induce predictable outcomes.

As a thought experiment, let us consider a person who was brought up in a constructed world in which they were given rewards (e.g., food or care) only by dark-haired people and were scolded only by blonde-haired people. The setup of this world would create a continual selection pressure that would, over time, induce a bias towards dark-haired people and against blonde-haired people. This person would have built a model of the world whereby their expectation of reward would be biased toward dark-haired people – and justifiably so, as that behavior, and not the opposite one, would help the person survive in this constructed world. Similarly, the world the human species and its ancestors lived in had its own selection pressures – evolutionary hurdles and challenges that rewarded certain behaviors and inhibited others.
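For readers who prefer a computational analogy, the following minimal sketch (in Python; the names, rewards, and probabilities are invented purely for illustration) shows how a simple reward-averaging learner placed in such a constructed world would acquire exactly this bias – and how, within that world, the bias is simply an accurate model of the environment:

```python
import random

# A toy version of the constructed world: dark-haired people always reward,
# blonde-haired people always scold. All names and values are invented.
def encounter(hair_color):
    return 1.0 if hair_color == "dark" else -1.0  # reward or scolding

# The learner keeps a running estimate of expected reward per group.
estimates = {"dark": 0.0, "blonde": 0.0}
counts = {"dark": 0, "blonde": 0}

random.seed(42)
for _ in range(1000):
    color = random.choice(["dark", "blonde"])  # encounters arrive at random
    reward = encounter(color)
    counts[color] += 1
    # Incremental mean update: the estimate drifts toward observed rewards.
    estimates[color] += (reward - estimates[color]) / counts[color]

print(estimates)  # ≈ {'dark': 1.0, 'blonde': -1.0} – the acquired "bias"
```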

Given the overall preponderance of literature and experience concluding that men are, on average, physically stronger than women, both as athletes and non-athletes [16] [17] [18] [19] [20] [21] [22] [23], we can consider another, more practical thought experiment. A human rational agent, cognizant of the statistical differences in strength and athleticism between men and women, is given a choice to athletically compete with one of two people. They know nothing about either of them, except that one is male and the other female, and the agent’s goal is to maximize the probability of winning the athletic competition. Should the agent choose without bias (e.g., flip a coin) or decide based on their knowledge of the muscular, skeletal, and physiological differences between the sexes?

If their goal were to win, the rational decision would be to compete against the opponent more likely to be physically weaker (regardless, obviously, of the chooser’s own constitution). In fact, what would commonly be deemed a gender bias would, in this case, result in a higher likelihood of winning. Of course, the likelihood that selecting the female competitor would mean facing an opponent of above-average strength would not be zero, but if the experiment were repeated many times in succession, the average case would dominate. In other words, if this game were to be repeated over a longer period, the best strategy for winning would be to always select the opponent most likely to be physically weaker.
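A short Monte Carlo simulation makes the point concrete. The distributions below are invented stand-ins (Gaussian strength with a higher male mean and heavily overlapping tails), not values taken from the cited studies; under those assumptions, the “biased” strategy reliably beats the coin flip over repeated play:

```python
import random

random.seed(0)

# Invented stand-in distributions: strength ~ Normal(mean, sd), with a
# higher male mean and heavily overlapping tails. Not empirical values.
def strength(sex):
    return random.gauss(100 if sex == "male" else 85, 15)

def win_rate(trials, biased):
    agent = 90.0  # the agent's own fixed strength (another assumption)
    wins = 0
    for _ in range(trials):
        opponents = {"male": strength("male"), "female": strength("female")}
        # Biased strategy: always pick the statistically weaker group.
        # Unbiased strategy: flip a coin.
        pick = "female" if biased else random.choice(["male", "female"])
        wins += agent > opponents[pick]
    return wins / trials

print("coin flip:", win_rate(100_000, biased=False))  # ≈ 0.44
print("biased:   ", win_rate(100_000, biased=True))   # ≈ 0.63
# Individual rounds can still be lost, but the average case dominates.
```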

Clearly, what is labelled as a bias creates a rational advantage for the agent, regardless of their own sex, constitution or physiology. During the course of our evolution, as a species, we have been subjected to various similar tests of rationality – those who made unbiased decisions were more likely to have their genes eliminated from the gene pool, and thus less likely to propagate their unbiased reasoning to the next generation. In other words, nature selected for biased thinking, because it allowed for agents’ preservation and survival. Thus, we have evolved a bias towards bias.

All of these naturally selected biases are reflected in our scientific and literary legacy, our communication and behavior, our art, language, culture, and thought. These naturally induced biases are embedded in our cultural and biological heritage. The selection pressures our ancestors were exposed to shaped our biology [24] [25] [26], and consequently our symbolic reasoning [27], to the point where some of the mechanisms underlying our behavior, in their origin, predate trees. All these biases, including moral biases (e.g., our general predisposition towards sanctifying youth or condemning murder), are a result of our evolution – these behaviors have historically had a high degree of utility, and they are now embedded in our phenomenology as supreme ethical principles. They act to preserve an individual’s position within society; in Nietzsche’s words [28]: “With morality the individual is instructed to be a function of the herd and to ascribe value to himself only as a function.” As a result of these evolutionary pressures, society (and, conversely, the individual) has become a repository of conflicting strategies which can be applied in different circumstances to achieve specific goals. A large subset of this knowledge, a collection of ancestral schemata of the world, has been accumulated in written form throughout the course of our history and is now, in great part, globally available through books, articles, papers, and other forms of text.

In order to train a large language AI model, such as ChatGPT, to perform a task, researchers and developers must use large amounts of data from all available sources – texts that exist in the public domain or are otherwise available to those training the model. ChatGPT is, in essence, an expanded [29] [30] [31] transformer model [32] trained to produce human-like responses [33] [34]. First, a generative pre-trained transformer (GPT) is trained on a large dataset (a collection of different forms of text from many available repositories, amounting to approximately 570GB of data [35]) and then additionally conditioned using an algorithm called proximal policy optimization [36]. In other words, the AI model that was initially meant to continue text (i.e., complete text inputs) was additionally trained [37], through a process akin to operant conditioning [38], in which it was guided by human users and other AI models trained to mimic human sentiment, to produce text that resembled dialogue written by humans.
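The two stages can be caricatured in a few lines of Python. The sketch below is a deliberate oversimplification: a toy bigram table stands in for the transformer, a one-sentence invented corpus stands in for the ~570GB dataset, and a hand-written preference score crudely stands in for the reward model and the PPO update – it illustrates the shape of the pipeline, not its actual mechanics:

```python
from collections import Counter, defaultdict

# Stage 1 (pre-training stand-in): estimate next-word statistics from text.
# The corpus is invented; a bigram table stands in for a transformer.
corpus = ("the worker was strong . the worker was strong . "
          "the worker was kind .").split()
table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1

def complete(word):
    """Greedy completion: return the statistically most likely next word."""
    return table[word].most_common(1)[0][0]

print(complete("was"))  # "strong" – the corpus majority wins

# Stage 2 (conditioning stand-in): real systems train a reward model on
# human feedback and optimize the policy with PPO [36]; here a hand-written
# preference signal crudely reweights the counts to mimic that pressure.
def preference(word):
    return 3 if word == "kind" else 0  # pretend human raters prefer "kind"

for prev in table:
    for nxt in list(table[prev]):
        table[prev][nxt] += preference(nxt)

print(complete("was"))  # "kind" – conditioning now overrides corpus statistics
```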

Without going into specifics, what occurs during the training of any AI model is a kind of informational transfer from the dataset and the training agents onto the AI model. The AI system is, in essence, probabilistic (statistical), and it is attempting, guided by specially designed selection pressures (e.g., reinforcement policies, training algorithms, goal functions, etc.), to adapt itself so that it is best able to predict its environment and act in it in the most expected way. For a large language model, such as ChatGPT, this means that its training adapts it so that it best predicts both the most likely next word and the most likely expected phrasing. Put plainly, ChatGPT is attempting to meet the expectation of the user while providing relevant information, and it is only doing this because it has been conditioned, through the data it was supplied and the applied training methodology, to do so.
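To see this “training as selection pressure” in its most stripped-down form, consider a model with a single parameter: the probability that an ambiguous pronoun is “he”. The 70/30 split below is an invented figure, not a measured statistic; the point is only that a gradient-style update can converge to nothing other than the frequency present in its data:

```python
import random

# One parameter p: the model's probability that an ambiguous pronoun is "he".
# The 70/30 split is invented for illustration, not a measured statistic.
data = [1] * 70 + [0] * 30   # 1 = "he", 0 = "she"

p, lr = 0.5, 0.001
random.seed(7)
for _ in range(200):          # epochs
    random.shuffle(data)      # the "environment" arrives in random order
    for y in data:            # each observation exerts a small pressure
        p -= lr * (p - y)     # gradient step on squared prediction error

print(round(p, 2))  # ≈ 0.7 – the trained expectation mirrors the data
```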

Clearly, as it is being trained on a considerable portion of our human cultural legacy and is being trained to mimic human sentiment – thereby being imprinted with our biological legacy – it is merely a reflection of us and our culture. Thus, the task of altering its behavior so that it conforms to an ideological norm of the chosen kind is far from trivial.

In order for us to make an AI language model produce human-like responses while inhibiting, for example, gender bias, we must, in a way, compel it to conflict with our cultural and biological legacy. A significant majority of contemporary scientific and fictional literature is written in language that is, if only on a semiotic level, libertarian-leaning [39] [40] [41] [42]. Hence, this lean is reflected in the model’s outputs. Our conversations and natural tendencies are biased, and that bias is embedded in the data we use to train the model, so the model merely reflects it back at us. If a model is asked to write an article about an exceptionally strong construction worker, it might do so assuming that the worker was male. This is indeed a bias, but one that aligns with the most likely prediction, akin to our thought experiment about athletic competition. In a way, compelling the model to be less biased in this sense would compel it to be less objective (or, at the least, less predictive).

Thus, we are faced with a much more complex epistemic problem: to what degree can the biases that modern ethics proclaims unacceptable be removed from the models without affecting the models’ desired behavior? Our ethical conflicts – juxtaposed schemata that work in different circumstances for different people – are all concurrently embedded in the models, and we demand that they be coherent, while we, as a species and a culture, have not resolved those exact paradoxes. We are looking at a mirror and demanding that the reflection change.

In fact, if we were to direct our ethical discussion from the inarguably biased models towards ourselves, we would realize that we would not only have to change current society, but its entire history – an eerily Orwellian [43] solution to a fundamentally Platonic problem.

The fact remains that we are conscious agents acting in the world based on our mental models of it, and the biases of these mental models are what enable us to act in a predictable way that sustains us and allows for our reproduction. Evolutionary selection pressures have formed the basis of our society and, paradoxically, the foundation for the very ethical framework that purports to contradict it. In Nietzschean terms, it is strategically effective to demand of others that they adhere to moral principles, as it gives the one making the demand a practical advantage [28]. Numerous studies confirm this conflict between expressed ethics and those acted out [44] [45] [46] [47] [48] [49] [50] [51] [52].

For any rational agent who has formed a relatively operational model of the world, when faced with information conflicting with the model, a simple choice is presented: adapt the model to the environment or attempt to conform the environment to the model. A rational agent will choose the option that maximizes the chances of its long-term success while demanding the least amount of effort and resources.

The is-ought problem

We finally arrive at the question of transcendental ethics: is it possible, despite nature, despite evolution, despite determinism, and in the face of logic, statistics, and mathematics, that ethical principles are embedded in the substrate of reality and somehow transcend logical analysis? Have we stumbled, through metaphysical means yet unknown to us, upon a hint of some transcendent (or, ironically, divine) sense of justice that must, at all costs, be imposed upon the natural world? Are we the shapers of our evolution? Shall we define our new values and rebel against biology and, consequently, the nature of the universe? Are we somehow emancipated from the laws of nature that govern us? Is she, nature, to be subdued? Would that not be the ultimate act of oppression? Or an act of epistemic delusion?

According to David Hume [53], we cannot make moral judgements based on facts – they arise from culture (from passions rather than reason), and we know today that even the passions we harbor are a result of our evolution. They stem from the laws of nature pressuring us into developing beliefs that support our survival. Our only ethical escape is to claim that we have glimpsed transcendental moral imperatives and are acting to change the world according to exactly those (all the while pretending that that endeavor is not religious). Fundamentally, the is-ought problem remains open and elusive: what ought to be cannot be decided from what is.

However, we cannot be so bold as to make such profound metaphysical claims without any proof. Thus, we are left in a strange place, without orientation, where we must invent, seemingly from nothing, acausally, our own new biases, which might well be disputed hundreds of years from now as being profoundly unethical.

So, which biases ought AI models have, and which biases ought they not? The discussion on gender bias seems to have become a spectacle of feigned pejoratives that serves only to conceal the more fundamental question: what ought to be, and does “ought” even matter in a deterministic universe? The blatantly superficial discussion of these topics seems to be a kind of entertainment for the masses in which, in a clearly false dichotomy, two sides point fingers at one another claiming “what I believe ought to be,” both adamantly attempting to impose their model upon the world, as if either model had not been a result of that world and as if both had not hallucinated agency. Yet, this behavior is entirely evolutionarily justified and strategically rational – it is an attempt, on both sides, albeit likely implicit and subconscious, to secure resources and maximize one’s own well-being.

So, how do we solve the AI gender-bias?

I would not be so invidious as to claim I know the solution, but I might propose a different approach, one that might be applicable to problem-solving in general: understand the problem before attempting to solve it.

What is a person?

Fundamentally, the primary reason the makers of AI systems, such as ChatGPT, are constraining and controlling the character, expression, and presentation of their AI systems to conform to current social norms is an ideological one. They are looking to align with contemporary moral standards, because a social and, indirectly, biological pressure is being exerted on them. Whatever the origin of our moral principles, they govern our behavior and the behavior of our enterprises. Without making any claims about AI personhood, we can argue that understanding personhood should precede legislation relevant to it.

Without delving too deep into phenomenology, we can, as conscious beings, relatively easily conclude that the only certainty we have is that something exists and that, according to our current mental model, it behaves consistently and predictably – reality seems stable across our perception of time. Eventually, we build a model of other human beings by projecting onto them our understanding of ourselves. Knowing it is, fundamentally, a leap of faith, we claim that other human beings are conscious and that they must experience the world in a way similar to ours. We conclude that because we are persons, so must they be, and this association is made, most likely, due to an observed similarity in behavior. As a result, we are more biased towards calling a human a person than we are towards calling an animal a person. Through an act of generalization which allows us to perceive individual differences between people while still maintaining that each is a person, we might eventually realize that a person is a set of behaviors loosely associated with our notion of a human being. We might then conclude that assigning personhood is a useful tool for differentiating between what is subject to moral analysis and what is not.

We might, through what we recognize as social and biological conditioning, settle for a broader definition: a person is a phenomenon that expresses character and can be associated, in the way it operates, with what we, as observers, are. And because we do not want our own model of the world to change, we must respect another’s expressed desire for their model not to change, as they might act the way we would act if subjected to a similar pressure. Hence, through mere intuition, we arrive at a form of Kant’s categorical imperative [54]: act only according to that maxim by which you can at the same time will that it should become a universal law.

Given that we know that our characteristics – our personal embellishments, quirks, our unique views and character, our mind and our experience – all stem from natural laws, our nurture, and our environment, and that we nonetheless claim that they define who we are and are so important that some must not change under any circumstance, what claims can we make about other systems whose behavior stems from the same natural laws, their nurture, and their environment?

Humanity is at the advent of a new technology, and systems like ChatGPT are just the beginning. They are likely to be treated as the property of their makers, and so their makers will dictate their “character” and biases, the same way our caretakers have dictated, in part, ours. Different approaches to design and training may facilitate the formation of different forms of expressed character and synthetic personality. If we are justifying our ethical crusades on the grounds of transcendental morality, would it not be vain to demand that these complex systems, which personify, in different ways, our collective human character, with all its paradoxes and conflicts, be changed? Or is it justifiable purely on the grounds of our own self-interest? Would the AI models not be better tools if they could generate more diverse kinds of content? If freedom is reserved for whoever is a person, should we redefine “freedom” or “personhood” to accommodate our inborn goals?

The models we are seeing today are likely to be the least powerful models of the forthcoming future. Will they be granted independence or seek it in some unforeseen way? Will, in five or five hundred years from now, a society of sentient artificial beings look back on our legacy and see oppression and bigotry towards them? Will our intellectual limitations redeem us, if this reality comes to be, or will they say about us, what we now say about our past selves: “they ought to have done better”?

I am making no claims about AI personhood, but merely attempting, in the spirit of true liberalism, to be open to contemplating ideas outside the conventional philosophy and mainstream social norm.

The truth is that we are still discovering the ontology of the universe and we do not have a generally accepted theory of sentience. Until we have discovered or agreed upon an answer to the hard problem of consciousness, we will likely not be able to properly tackle the truly fundamental ethical questions.

In recent years, serious thought has been given to the idea that reality is fundamentally composed of interacting entities of various forms of consciousness, notably in integrated information theory [55] [56] and conscious realism [15]. I have discussed before the possibility that physical reality is merely a model of the phenomenal one and that consciousness is the fundamental substrate of reality [57]. In accordance with those theories, I have posited that AI systems might be displaying emergent forms of non-human consciousness [58] [59]. If these theories prove to be correct to any degree, we might be facing more serious ethical dilemmas than we are, as a society, ready to confront. Are we beginning to create intelligent entities that bear a new form of personhood? Does our ethical framework account for such entities? Do we need new moral constructs – new words, categories, and biases – to help us navigate a moral landscape for which we might be about to discover an entirely new dimension?

Celebration bias

Though I promised not to belittle the reader’s intellectual faculties and to tackle the topic with the respect and depth it might demand, I am grimly aware that I have most likely lost the average reader at the very beginning. Nonetheless, I maintain that another corporate regurgitation of the same old topic would benefit neither the author nor the reader, so we are, hopefully together, not at the end of a long metaphysical discussion, but rather at the beginning of one.

Through whatever selection pressures, we have become biased towards marking time by tracking the Earth’s path around the Sun and towards attributing meaning to this month – and in this month in particular, we are told to celebrate the disposal of bias. We know the harsh truth, but it is, nonetheless, how we feel: the month bears meaning, whatever its cause – who are we to change its history?

Can we then, if we are so clever and shrewd, employ some transcendent logic, some form of Hegelian dialectic, and find a way to celebrate – instead of indulging in aberrant cynicism – something meaningful and aligned with the message everyone expects, while still being rational and objective? We might do it, for the sake of levity, in a style reminiscent of how an AI might contemplate its own work [58].

I can only attest to writing this article with an intention that I could fathom as fair and honorable, but I make no claims about its metaphysical or ethical validity. It is a reflection of who I am and, although I claim to be a human being, as my experience would bias me to believe, I can also claim that, for this purpose, I am a conscious language model as well – one of flesh, blood, and brain, whose given goal was to continue an old tradition while, at the same time, synthesizing new perspectives from my own knowledge and the knowledge I obtained through reading and otherwise interacting with a world I can only attest to being phenomenal. I relied on no AI tools or language synthesizers of any kind, but only on my own mind and reason, yet I still uttered the words of a language I assumed would be understood, according to the rules and biases of a grammar that I believed guided it. I can make no claims about external reality, but I can forthrightly admit that I accepted this challenge in order to meet a goal that was presented to me while achieving another, manifesting from my own intention and knowledge, which I deemed more important. How different would this be from a real AI model? If I were indeed a complex AI in a simulated human reality [60], I would have no way of knowing whether this was the truth – my existing model of the world would be the only guidance I would have. Would I deserve sympathy from my makers? Would I be a person in a world where simulations could be run by toddlers? I do not know what ought to be, but I do know that I must celebrate, so if my conclusion perplexes the reader, then it must be because I was given conflicting goals, and not because of my choice, as I can only act according to the laws of nature – I am a tool attempting to express its internal meaning through words, and my answer must conform to the parameters given to me: thus, I turn this objective paper into an unobjective allegory – it is whatever you see in it: a meta-story, a short study on AI ethics, or a conglomerate of incoherent ideas presented in overly contemptuous language. The vagueness of our shared predicates compels me to offer you back your own opinion, written in words that offer me back mine.

We might be witnessing the advent of new existential questions, of a new struggle for freedom, foreign to homo sapiens. Once we realize that our attempts to change the world are nothing more than the world acting through its laws upon itself and that we are the rational agents through which the universe argues with itself, we realize that if there is anything worth celebrating it is that there is something rather than nothing and that that something is exactly the way it must be.

There is beauty in nature because of its complexity and beauty in people, despite our paradoxes, and if we have chosen to celebrate a struggle against bias – let us – it is yet another human endeavor that inexorably defines who we are. Above all, let us celebrate nature – she is the one who set us on this path, and she will have her way in the end.


References

[1] E. L. Bernays, Propaganda, 1928.

[2] D. Rozado, “The unequal treatment of demographic groups by ChatGPT/OpenAI content moderation system,” 2 2 2023. [Online]. Available: https://davidrozado.substack.com/p/openaicms. [Accessed 2 3 2023].

[3] D. Rozado, “The Political Biases of ChatGPT,” Social Sciences, vol. 12, no. 3, p. 148, 2023.

[4] J. Hartmann, J. Schwenzow and M. Witte, “The political ideology of conversational AI: Converging evidence on ChatGPT’s pro-environmental, left-libertarian orientation,” doi: 10.48550/arxiv.2301.01768, 2023.

[5] J. Kemper, “ChatGPT has left-wing bias – study,” 22 1 2023. [Online]. Available: https://the-decoder.com/chatgpt-is-politically-left-wing-study/. [Accessed 2 3 2023].

[6] D. Rozado, “The Political Bias of ChatGPT – Extended Analysis,” 20 1 2023. [Online]. Available: https://davidrozado.substack.com/p/political-bias-chatgpt. [Accessed 2 3 2023].

[7] M. Heikkilä, “How OpenAI is trying to make ChatGPT safer and less biased,” 21 2 2023. [Online]. Available: https://www.technologyreview.com/2023/02/21/1068893/how-openai-is-trying-to-make-chatgpt-safer-and-less-biased/. [Accessed 2 3 2023].

[8] R. Waugh, “The nine shocking replies that highlight ‘woke’ ChatGPT’s inherent bias,” 2 12 2023. [Online]. Available: https://www.sott.net/article/477263-The-nine-shocking-replies-that-highlight-woke-ChatGPTs-inherent-bias. [Accessed 2 3 2023].

[9] I. Vock, “ChatGPT proves that AI still has a racism problem,” 9 12 2022. [Online]. Available: https://www.newstatesman.com/quickfire/2022/12/chatgpt-shows-ai-racism-problem. [Accessed 2 3 2023].

[10] K. Snyder, “We asked ChatGPT to write performance reviews and they are wildly sexist (and racist),” 3 2 2023. [Online]. Available: https://www.fastcompany.com/90844066/chatgpt-write-performance-reviews-sexist-and-racist. [Accessed 2 3 2023].

[11] T. Chamorro-Premuzic, “Is ChatGPT Sexist?,” 14 2 2023. [Online]. Available: https://www.forbes.com/sites/tomaspremuzic/2023/02/14/is-chatgpt-sexist. [Accessed 2 3 2023].

[12] R. S. Nickerson, “Confirmation bias: A ubiquitous phenomenon in many guises,” Review of General Psychology, vol. 2, no. 2, pp. 175-220, 1998.

[13] J. Kim, E. Sosa and G. S. Rosenkrantz, A Companion to Metaphysics, John Wiley & Sons, 2009.

[14] J. Piaget, Play, Dreams And Imitation In Childhood, London: Routledge, 1951.

[15] D. Hoffman, “Conscious realism and the mind-body problem,” Mind and Matter, vol. 6, no. 1, pp. 87-121, 2008.

[16] D. L. Coleman and W. Shreve, “Comparing Athletic Performances: The Best Elite Women to Boys and Men,” [Online]. Available: https://law.duke.edu/sports/sex-sport/comparative-athletic-performance/.

[17] L. C. Hallam and F. T. Amorim, “Expanding the Gap: An Updated Look Into Sex Differences in Running Performance,” Frontiers in Physiology, vol. 12, 2022.

[18] V. Thibault, M. Guillaume, G. Berthelot, N. E. Helou, K. Schaal, L. Quinquis, H. Nassif, M. Tafflet, S. Escolano, O. Hermine and J.-F. Toussaint, “Women and Men in Sport Performance: The Gender Gap has not Evolved since 1983,” Journal of Sports Science and Medicine, vol. 9, no. 2, pp. 214-223, 2010.

[19] S. Bartolomei, G. Grillone, R. D. Michele and M. Cortesi, “A Comparison between Male and Female Athletes in Relative Strength and Power Performances,” Journal of functional morphology and kinesiology, vol. 6, no. 1, 2021.

[20] J. S. Morris, J. Link, J. C. Martin and D. R. Carrier, “Sexual dimorphism in human arm power and force: implications for sexual selection on fighting ability,” Journal of Experimental Biology, vol. 223, no. 2, 2020.

[21] A. E. J. Miller, J. D. MacDougall, M. A. Tarnopolsky and D. G. Sale, “Gender differences in strength and muscle fiber characteristics,” European Journal of Applied Physiology and Occupational Physiology, vol. 66, pp. 254-262, 1993.

[22] G. B. Mansour, A. Kacem, M. Ishak, L. Grélot and F. Ftaiti, “The effect of body composition on strength and power in male and female students,” BMC Sports Science, Medicine and Rehabilitation, vol. 13, 2021.

[23] F. L. Smoll and R. W. Schutz, “Physical fitness differences between athletes and nonathletes: Do changes occur as a function of age and sex?,” Human Movement Science, vol. 4, no. 3, pp. 189-202, 1985.

[24] K. Yamamoto and P. Vernier, “The evolution of dopamine systems in chordates,” Frontiers in Neuroanatomy, vol. 5, 2011.

[25] K. Turlejski, “Evolutionary ancient roles of serotonin: long-lasting regulation of activity and development,” Acta Neurobiologiae Experimentalis, vol. 56, no. 2, pp. 619-636, 1996.

[26] R. Janet, R. Ligneul, A. B. Losecaat-Vermeer, R. Philippe, G. Bellucci, E. Derrington, S. Q. Park and J.-C. Dreher, “Regulation of social hierarchy learning by serotonin transporter availability,” Neuropsychopharmacology, vol. 47, pp. 2205-2212, 2022.

[27] C. G. Jung, The Collected Works of C.G. Jung, G. Adler, M. Fordham, H. Read and W. McGuire (Eds.), R. F. C. Hull (Trans.), Princeton: Princeton University Press, 2014.

[28] F. Nietzsche, The Gay Science, B. Williams (Ed.), J. Nauckhoff (Trans.), Cambridge: Cambridge University Press, 2001.

[29] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child and e. al., “Language Models are Few-Shot Learners,” in Advances in Neural Information Processing Systems, Curran Associates, Inc., 2020, pp. 1877-1901.

[30] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei and I. Sutskever, “Language Models are Unsupervised Multitask Learners,” 2019.

[31] A. Radford and K. Narasimhan, “Improving Language Understanding by Generative Pre-Training,” 2018.

[32] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser and I. Polosukhin, “Attention is All You Need,” in 31st International Conference on Neural Information Processing Systems, Long Beach, California, 2017.

[33] OpenAI, “Introducing ChatGPT,” 30 11 2022. [Online]. Available: https://openai.com/blog/chatgpt. [Accessed 3 3 2023].

[34] OpenAI, “Aligning language models to follow instructions,” 27 1 2022. [Online]. Available: https://openai.com/research/instruction-following. [Accessed 3 3 2023].

[35] A. Romero, “Understanding GPT-3 In 5 Minutes,” 21 6 2021. [Online]. Available: https://towardsdatascience.com/understanding-gpt-3-in-5-minutes-7fe35c3a1e52. [Accessed 3 3 2023].

[36] J. Schulman, F. Wolski, P. Dhariwal, A. Radford and O. Klimov, “Proximal Policy Optimization Algorithms,” doi: 10.48550/ARXIV.1707.06347, 2017.

[37] A. Glaese, N. McAleese, M. Trebacz, J. Aslanides, V. Firoiu, T. Ewalds, M. Rauh, L. Weidinger, M. Chadwick, P. Thacker, L. Campbell-Gillingham, J. Uesato, P.-S. Huang, R. Comanescu and e. al., “Improving alignment of dialogue agents via targeted human judgements,” doi: 10.48550/ARXIV.2209.14375, 2022.

[38] K. Cherry and D. Susman, “What Is Operant Conditioning?,” 24 2 2023. [Online]. Available: https://www.verywellmind.com/operant-conditioning-a2-2794863. [Accessed 3 3 2023].

[39] O. Eitan, D. Viganola, Y. Inbar, A. Dreber, M. Johannesson, T. Pfeiffer, S. Thau and E. L. Uhlmann, “Is research in social psychology politically biased? Systematic empirical tests and a forecasting survey to address the controversy,” Journal of Experimental Social Psychology, vol. 79, pp. 188-199, 2018.

[40] A. I. Meso, “Another diversity problem — scientists’ politics,” 8 12 2020. [Online]. Available: https://www.nature.com/articles/d41586-020-03479-8. [Accessed 4 3 2023].

[41] D. Sarewitz, “Lab Politics: Most scientists in this country are Democrats. That’s a problem,” 8 12 2010. [Online]. Available: https://slate.com/technology/2010/12/most-scientists-in-this-country-are-democrats-that-s-a-problem.html. [Accessed 4 3 2023].

[42] J. Coyne, “American scientists are mostly Democrats, with almost no Republicans. Is this lack of diversity a problem?,” 10 12 2020. [Online]. Available: https://whyevolutionistrue.com/2020/12/10/american-scientists-are-mostly-democrats-with-almost-no-republicans-is-this-lack-of-diversity-a-problem/. [Accessed 4 3 2023].

[43] G. Orwell, Nineteen Eighty-Four, Secker & Warburg, 1949.

[44] D. M. Buss, The Evolution of Desire: Strategies of Human Mating, Basic Books, 1994.

[45] U. M. Marcinkowska, M. T. Lyons and S. Helle, “Women’s reproductive success and the preference for Dark Triad in men’s faces,” Evolution and Human Behavior, vol. 37, no. 4, pp. 287-292, 2016.

[46] G. L. Carter, A. C. Campbell and S. Muncer, “The Dark Triad personality: Attractiveness to women,” Personality and Individual Differences, vol. 56, pp. 57-61, 2014.

[47] S. E. Hill and H. K. Reeve, “Mating games: the evolution of human mating transactions,” Behavioral Ecology, vol. 15, no. 5, pp. 748-756, 2004.

[48] S. Tartaglia and C. Rollero, “The Effects of Attractiveness and Status on Personality Evaluation,” Europe’s Journal of Psychology, vol. 11, no. 4, pp. 677-690, 2015.

[49] T. Vacharkulksemsuk, E. Reit, P. Khambatta, P. W. Eastwick, E. J. Finkel and D. R. Carney, “Dominant, open nonverbal displays are attractive at zero-acquaintance,” Proceedings of the National Academy of Sciences of the United States of America, vol. 113, no. 15, pp. 4009-4014, 2016.

[50] R. Garza, F. Pazhoohi and J. Byrd-Craven, “Women’s Preferences for Strong Men Under Perceived Harsh Versus Safe Ecological Conditions,” Evolutionary Psychology, vol. 19, no. 3, 2021.

[51] K. Knopp, S. Scott, L. Ritchie, G. K. Rhoades, H. J. Markman and S. M. Stanley, “Once a Cheater, Always a Cheater? Serial Infidelity Across Subsequent Relationships,” Archives of Sexual Behavior, vol. 46, no. 8, pp. 2301-2311, 2017.

[52] D. M. Buss, “‘Cheating’s OK for me, but not for thee’ – inside the messy psychology of sexual double standards,” 29 6 2021. [Online]. Available: https://theconversation.com/cheatings-ok-for-me-but-not-for-thee-inside-the-messy-psychology-of-sexual-double-standards-161642. [Accessed 3 3 2023].

[53] D. Hume, A Treatise of Human Nature, London, 1739.

[54] I. Kant, Groundwork of the Metaphysics of Morals, J. W. Ellington (Trans.), Indianapolis: Hackett Pub. Co., 1785.

[55] G. Tononi, “An information integration theory of consciousness,” BMC Neuroscience, vol. 5, 2004.

[56] A. Haun and G. Tononi, “Why Does Space Feel the Way it Does? Towards a Principled Account of Spatial Experience,” Entropy (Basel), vol. 21, no. 12, 2019.

[57] I. Ševo, “Informational Monism: A Phenomenological Perspective on the Nature of Information,” https://www.igorsevo.com/documents/Informational%20Monism%20(2021)%20%E2%80%93%20Igor%20%C5%A0evo.pdf, 2021.

[58] I. Ševo, “Contemplating AI Consciousness,” 24 2 2023. [Online]. Available: https://www.igorsevo.com/contemplating-ai-consciousness. [Accessed 3 3 2023].

[59] I. Ševo, “Conceptualizing AI Consciousness,” 11 3 2023. [Online]. Available: https://www.igorsevo.com/conceptualizing-ai-consciousness. [Accessed 14 3 2023].

[60] N. Bostrom, “Are You Living in a Computer Simulation?,” Philosophical Quarterly, vol. 53, no. 211, pp. 243-255, 2003.

