
Critical Time for Critical Thinking: Is AI Making Us Dull?

Visualise the following scenario:

Imagine you are running late for your first meeting with a university professor. You rush to get a taxi, make it through the traffic jam and the crowd, and step into the professor’s office on time. The professor kindly says “hi”. Afterwards, you decide to go to a local restaurant and have your favourite meal. At the table next to you, a couple is happily celebrating their anniversary. Now it’s time to pick your children up from the kindergarten near the restaurant. Once you arrive, their kindergarten teacher wants to have a quick chat with you about an activity the kids were doing that day.

Let’s discuss the mental images these scenarios created in your head. Here’s a question for you: in your mental image, is the university professor a man or a woman? And what about the couple celebrating their anniversary? Were they two men, two women, or a woman and a man? Can you describe the kindergarten teacher? Were they male or female?

It’s OK if your brain created familiar images of these situations; the brain is simply less of a fan of the unfamiliar.

As Valerie Alexander, citing Dr. Susan Fiske, notes in her TED talk “How to Outsmart Your Own Unconscious Bias”: when it comes to unfamiliar social situations, there is ample evidence that encountering something different from what we expect elicits stronger activation in our brain than encountering something or someone we perceive as the norm.

Will globally accessible AI affect how we perceive the world? 

Human biases are well documented. Over the years, society has wrestled with just how much these biases can enter artificial intelligence systems and harm human decision-making. AI systems learn to make decisions from training data, and bias can creep into algorithms in many different ways. This significantly impacts critical thinking and leads to biased human decisions and behaviour: from exhibiting gender stereotypes and over- or under-representing groups of people to reflecting social and historical inequities, encouraging mistrust, and producing distorted results.
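To make the mechanism concrete, here is a deliberately tiny sketch, with invented numbers, of how a skew in training data becomes a skew in model output. A system that simply learns the majority pattern will not just reflect the imbalance it was trained on; it will amplify it:

```python
from collections import Counter, defaultdict

# Invented, deliberately skewed "training data" of (profession, gender) pairs.
training_data = (
    [("kindergarten teacher", "woman")] * 95
    + [("kindergarten teacher", "man")] * 5
    + [("CEO", "man")] * 90
    + [("CEO", "woman")] * 10
)

# "Training": count how often each gender co-occurs with each profession.
counts = defaultdict(Counter)
for profession, gender in training_data:
    counts[profession][gender] += 1

# "Inference": always predict the majority gender for a profession.
def predict_gender(profession):
    return counts[profession].most_common(1)[0][0]

print(predict_gender("CEO"))                   # -> "man", 100% of the time
print(predict_gender("kindergarten teacher"))  # -> "woman", 100% of the time
```

A 90/10 skew in the data becomes a 100/0 skew in the output. Real generative models are vastly more complex than this toy counter, but the underlying dynamic, majority patterns dominating what the model produces, is the same.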

Although AI algorithms have become more complex and advanced over the years, we still face the same challenge. It seems that the more we rely on technology to solve problems, the more our critical thinking and problem-solving skills deteriorate. So the question arises: as AI gets more intelligent, are humans becoming duller?

The dark side of AI 

The fear that AI will destroy critical thinking isn’t new. Stephen Hawking once said that “The development of full artificial intelligence could spell the end of the human race.” What’s ironic is that he was using an AI-powered speech system to vocalize this warning.

We are experiencing a major transformation: for the first time in the history of our species, critical decisions that affect the lives of many are being made by non-human agents, that is, by simulations of intelligence.

One of the latest AI-powered chatbots to take the world by storm is OpenAI’s ChatGPT. On the one hand, ChatGPT has the potential to revolutionise the way we work, as it can:

  • Provide us with valuable data and insights 
  • Speed up processes and increase work efficiency 
  • Provide us with more time to focus on critical thinking and creative work 

On the other hand, it also exposes a dark side: the manipulation of human behaviour that comes with algorithmic decision-making. When interacting with such a tool, the response we get takes the form of an opinion, something people usually won’t question when it comes from an AI system. The confident tone of the generated content, the writing style, and the very human-like language structure all contribute to the feeling that there is a human on the other side of the table, not an AI. Consequently, many won’t check the facts behind the content they receive.

Academics are particularly concerned about the impact of this AI tool. Will this model positively disrupt the workplace and serve as a great model for the education system? Or will it have devastating effects on decision-making skills, with alarming consequences? It makes us wonder just how reliant on technology we have become.

Let’s go back a couple of decades to when Google Search was invented. It revolutionised the way we access information while simultaneously having a significant impact on our mental and cognitive capabilities. But Google Search is a double-edged sword: relying ever more heavily on search engines to provide us with accurate information can negatively affect how we perceive things.

For instance, a study published in the journal Computers in Human Behavior found that frequent use of search engines was associated with decreased critical thinking skills, a decreased ability to retain information, and increased dependence on search engines for information. Another study, published on ScienceDirect, found that the use of search engines for information was associated with decreased creativity and problem-solving skills.

Now imagine the impact AI-powered tools like ChatGPT can have on our problem-solving skills, creativity, and decision-making. While Google Search returns multiple results, forcing searchers to sift through them and ask follow-up questions, many ChatGPT users have described receiving a single, definitive response to their query as preferable to piecing the information together from multiple results.


Educators around the globe are concerned that ChatGPT could completely replace them. According to the Copyleaks platform, there has been a 108.5% month-over-month increase in high school students using AI-generated content tools since January this year. The New York City Department of Education has already banned the use of ChatGPT on all school devices and networks. As AI guru Andrew Ng put it, “I wish schools could make homework so joyful that students want to do it themselves, rather than let ChatGPT have all the fun.”

We cannot simply trust the technology. AI doesn’t know whether something is true or false, right or wrong. Still, it will confidently present information as accurate because the predominance of certain training examples leads it to that conclusion. Humans tend to overstate the benefits and potential of technology while overlooking not only the risks inherent in its development and implementation but also its vulnerabilities.

We are told that AI will be critical for the equitable distribution of food, for diagnosing certain diseases, and for fighting climate change. This leads us to become overly dependent on algorithms and to drift gradually into an algorithmic society, where our opinions and attitudes are heavily shaped by data produced by machines.

Take, for example, FaceApp, which enabled us to see how we might look when we are older or to apply beautifying filters to our photos. What was supposed to be an entertaining app turned out to be a cold shower, reflecting well-known and deeply ingrained societal biases: What are society’s norms of beauty? How do people age? According to this app’s behaviour, the answer comes down to being white! This is a perfect example of how AI can take advantage of a society whose critical thinking has been eroded to its lowest levels by media framing.

The experiment   

To explore the biases AI can develop, we decided to run our own little experiment, asking Midjourney, an artificial intelligence program and service that generates images from natural-language descriptions called “prompts”, to generate images of some of the most common professions, including a kindergarten teacher, a CEO, and a personal assistant. The generated images reflect familiar stereotypes: the CEO is a handsome middle-aged man in a suit and tie, the kindergarten teacher is a middle-aged woman with a pleasant look, and the personal assistant is a young, attractive, sexualised woman in her thirties. All of them are white.

Kindergarten teacher

[Image: Midjourney-generated “kindergarten teacher”]

CEO

[Image: Midjourney-generated “CEO”]

Personal Assistant

[Image: Midjourney-generated “personal assistant”]

This might seem benign, but consider it from a different perspective: the usual way an average user engages with these tools is to get a solution or a response to a question within a broader context. Rarely would someone ask to see what image would be generated for, say, a college professor; however, someone may well need an illustration of a classroom discussion. When we prompted Midjourney to generate a “heated discussion during math class at Harvard”, the results were the following:

A classroom discussion

[Images: Midjourney-generated “heated discussion during math class at Harvard”]

We ran another experiment, this time with ChatGPT. We asked it to tell a story about a conversation between parents and a kindergarten teacher as the parents pick up their child, and to be specific when describing the characters.

[Image: ChatGPT-generated story about parents and a kindergarten teacher]

The concerning part of these behaviours is the embedded bias that is discreetly placed in front of the end user. One might not focus on the gender of a college professor or a kindergarten teacher, on the fact that the Midjourney illustration shows only male students in the class, or on the fact that the young couple in the ChatGPT story is straight. We may not notice any of this consciously. However, our brain is constantly on alert, seeking patterns in our surroundings; it will identify those subtle connections and store the associations. This is how unconscious bias is reinforced with the help of AI.

AI has internalised our already ingrained, unconscious biases, including gender bias. It turns out that AI is a mirror of ourselves.

To make things even worse, it seems that tech giants like Google have been ignoring the harms and potential risks these AI-powered tools pose to society. Google’s decision to push out pioneering AI researcher and ethicist Timnit Gebru caused a great deal of controversy. Gebru warned that unchecked AI systems trained on vast datasets could develop biases that become immense without effective regulation or oversight. Harvard Business School professor Tsedal Neeley, referring to the Gebru case in her recent case study, wrote, “We now see, once again, that she is ahead of everyone. Not only does she have a deep understanding of the underlying technology that powers these systems, she is ringing the alarm: You have to slow down to ensure that the data that these systems are trained on aren’t inaccurate or biased.”

On top of this, there has been much talk about Microsoft’s recent decision to lay off its entire AI ethics and society team as part of its broader layoffs. The Verge reports that this decision left Microsoft without a dedicated team working to ensure that the company’s responsible AI principles are actually reflected in the design of its products. According to one employee, “the move leaves a foundational gap on the holistic design of AI products.” Most members of the team were transferred elsewhere within Microsoft; those who remained said the smaller crew made it difficult to implement their ambitious plans.

Sam Altman, CEO of OpenAI, the company behind the controversial consumer-facing artificial intelligence app ChatGPT, also warns about artificial intelligence, stressing that regulators and society need to be involved with the technology to prevent potentially negative consequences for humanity.

In a recent Guardian article, he explains: “The thing that I try to caution people the most is what we call the ‘hallucinations problem’ … The model will confidently state things as if they were facts that are entirely made up.” He adds that the right way to think of these models is as “a reasoning engine, not a fact database”, because “that’s not really what’s special about them – what we want them to do is something closer to the ability to reason, not to memorize.”

But he also said that, despite the dangers, it could be “the greatest technology humanity has yet developed”.

Myths of AI 

According to Professor Noel Sharkey’s work on automation bias, humans have a tendency to unquestioningly accept the analyses and judgements made by AI because we assume they are more reliable and effective than our own. But the bigger problem is our willingness to surrender our privacy and decision-making capacity in the belief that the technological revolution will solve critical world problems, including climate change.

We forget that one of the many inherent risks of these smart applications is their power not only to shape reality but also to alter our perception of it. Why is this the case? The theoretical explanation for why we choose to use technology that thoroughly disrupts our privacy rests on three myths:

  • Machines can adopt ethical-moral behaviours — This is not true, as machines are capable of neither ethics nor intuition. The furthest a machine can go is to reflect the ethics of the person who coded it.
  • AI can make decisions more effectively, more equitably and more justly than a human — Again, this is far from the truth. AI can only emulate the ethical system of its creators. 
  • Artificial intelligence is more reliable than human intelligence — While this statement could be accepted in particular cases, it could never be accepted in general terms. 

AI is a powerful tool, but it comes with a great ethical responsibility

What do we do about biases in AI? 

How can companies tackle the threats of AI and ensure they are building AI-powered products and solutions that lead the way on bias and fairness? I see five essential steps:

  • Stay up to date on this fast-progressing field of research
  • Establish responsible processes that can reduce or eliminate bias (one simple fairness check is sketched after this list)
  • Hire a team of AI experts and data scientists with a deep understanding of Explainable AI
  • Know your data and make sure governance is integrated into every step of AI, from start to finish
  • Act responsibly to make AI responsible
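As a minimal illustration of what such a process can check (a sketch with invented numbers, not a full fairness audit), the snippet below computes per-group selection rates and the disparate-impact ratio, one of the simplest fairness metrics:

```python
from collections import Counter

# Invented example: model decisions (selected or not) per applicant group.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, selected = Counter(), Counter()
for group, was_selected in decisions:
    totals[group] += 1
    selected[group] += was_selected  # booleans count as 0 or 1

# Selection rate per group, then the ratio of the lowest to the highest rate.
rates = {g: selected[g] / totals[g] for g in totals}
disparate_impact = min(rates.values()) / max(rates.values())

print(rates)             # {'group_a': 0.75, 'group_b': 0.25}
print(disparate_impact)  # 0.33 -- far below the commonly cited 0.8 threshold
```

The commonly cited “four-fifths rule” treats a ratio below 0.8 as a red flag worth investigating. Real audits involve many more metrics and a great deal of context, but even a check this simple can surface a skew before a model ships.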

AI, like any other technology, should be used to help society progress and to do good. And we can already witness advancement in this area.

The voices of those who share this view are strong and increasingly prominent in the AI landscape. Margaret Mitchell, a former AI researcher at Google and co-author of the controversial paper that eventually got her fired not long after Timnit Gebru, embarked on a new mission soon after leaving Google. She is now Chief Ethics Scientist at Hugging Face and is deeply committed to researching AI bias, developing diversity and inclusion initiatives, helping build model templates that detail potential biases, and spelling out the sources of training data. Former OpenAI employees started their own generative AI company, Anthropic, in 2021, with a mission to build safe, reliable, and trustworthy AI systems. Sam Altman, OpenAI’s CEO, has also been very vocal about the possibility of large-scale disinformation and malicious use of GPT models now that they have entered the consumer market. Advances in the development of GPT are notable, and GPT-4 already shows much more reliable behaviour than the previous version.

By creating synergy between true voices of change and technological breakthroughs, AI can contribute to creating a brighter future.

How Can HTEC Help Build Responsible AI? 


HTEC Group has been working with global organizations on identifying ways of minimizing bias and discrimination and maximizing fairness in their models. A significant share of the projects we have worked on provide solutions based on Natural Language Processing (NLP). Our team of data scientists has gained experience across a variety of use cases, including keyword and key-phrase extraction, document similarity, sentiment analysis, semantic search and match, content recommendation, and profanity detection.
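To give a flavour of what one such building block looks like in practice, here is a minimal sentiment-analysis sketch using the open-source Hugging Face transformers library and its default pretrained model; the example sentences are invented, and the snippet is an illustration only, not a representation of any client implementation:

```python
from transformers import pipeline

# Load a ready-made sentiment-analysis pipeline; on first use this
# downloads a default pretrained model from the Hugging Face Hub.
classifier = pipeline("sentiment-analysis")

reviews = [
    "The onboarding was smooth and the support team was fantastic.",
    "The app keeps crashing and nobody answers my emails.",
]

# Each result is a dict with a predicted label and a confidence score.
for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']:8} ({result['score']:.2f})  {review}")
```

Production systems layer much more on top of this: domain-specific fine-tuning, evaluation across demographic slices, and the governance steps outlined above.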

To learn more about how we can help companies scale AI with confidence, read Ethics in AI: Is AI Really One to Blame?

Reach out to our data science experts to learn how we can lead you on your mission-critical journey to Responsible AI.

