What happens when we don’t need to make a choice about using GenAI any more, because it is inextricably embedded in all of the tools we use?

Developments over the last few months, fuelled by the post-ChatGPT gold-rush on AI, a very competitive innovation and technology landscape, the alignment and acquisition of LLMs by big companies, and the emergence of more open-source platforms, mean that AI is literally everywhere now. Including EdTech.

It used to be an active choice to use an AI tool.

Back in the olden days, before ChatGPT, Generative AI (GenAI) capabilities were something a user had to seek out and make an active choice to use. If someone wanted to write with GPT, they needed to use a specialist tool like Craft, WriteSonic or Peppertype. If they wanted to make images, they had to go and find models like MidJourney or Stable Diffusion. There was a moment, or a process, where one would have to think “can I use AI for this?”.

There was intentionality in engaging with GenAI, whatever the purpose.

At the time of writing, it’s six months since ChatGPT was released and threw GenAI into the minds of many, shook up education and created a wild new world of tools and use-cases. Even when ChatGPT hit the news, it was still another destination, another way to engage with AI. Image models became more powerful, to the point of photo-realism, but they were still another product, another choice, and the preserve of those who went looking. A recent report by Pew found that although about 58% of US adults had heard of ChatGPT, only 14% had used it, even fewer had found it useful, and a generational divide was apparent.

In the meantime, the AI landscape has changed at a pace that means those who’ve never tried ChatGPT or other tools might never need to…

Now “there’s an AI for that”

There’s An AI For That currently has over 4,700 tools listed, sortable by task-type, and the list keeps growing. New companies are emerging, big companies are growing and the race is on to dominate niches and markets with AI capabilities.

Education is still adapting. Even if we don’t get from GPT-4 to GPT-5, the implications of GenAI are huge for teaching, learning, assessment, policy and much, much more. We’ve seen applications of AI that can help teachers plan, support students in new ways, assess and personalise learning. Academic Integrity guidelines are shifting. The term CoPilot is popping up everywhere, as we are encouraged to think of AI as our assistant, mentor, thought-partner or co-worker.

The same is true for students, as GenAI is readily accessible to anyone who wants it – as well as those who don’t, and those who don’t know it’s here. Their futures are shifting. But it’s not just their futures, it’s their present.

In 2022, people were impressed by pretty basic image or text generation using AI. Discussions of the impacts of AI on art, artists, writers and academic integrity were bubbling up. We were asking if we were ready for these to be used in education. In 2023, we’re rarely surprised by the latest AI development – whether it is photo-realistic images, deep-fake music, video, 3D world creation, research assistants or medical breakthroughs.

“Huh, cool. Was that AI?”

Are we at the tipping point of invisible AI?

Even as we’ve become blasé about the breakthroughs, trying the new thing has still required the mindful move of seeking out an AI tool. But is the tipping point of ‘invisible AI’ here already?

GenAI is not just everywhere, it’s almost inside everything. Last week’s big news of Adobe incorporating Firefly AI tools into Photoshop and its core suite was just the tip of the iceberg. GenAI had already rolled out by default in Canva. Microsoft Bing and Google Bard have built GenAI chat into major search engines. Grammarly and Snapchat have AI assistants built-in. CoPilots are coming soon for Google Workspace and Microsoft Windows.

With every new embedded AI function comes another, often anthropomorphised, name. Each release whips around social media framed in GenAI language, then soon becomes an accepted part of the tool, the suite or the workflow. The identity of “AI” fades, yet the utility remains.

We may forget that the button we’re clicking, or the function we’re using, is “AI” at all. It will become invisible, and the conscious choice to use GenAI will disappear. New users of the tools might have no idea that the features they’re using are connected to GenAI; they’ll just be another icon, pop-up or helpful assistant appearing in their work.

We’ve seen it before, as we learned to search Wikipedia, to “just Google it”, and to accept spellcheck, predictive text and the preference algorithms of Amazon, Spotify and Netflix. Our home assistants, Siri and navigation apps have further embedded machine learning in our normal lives. They were all amazing once, but now we take them for granted, without asking how they work, or even making an active decision to use them.

A joke tweet I posted in early January when Microsoft announced AI assistants in core tools.

Image generated in DALL-E 2.

So what does this mean for teaching & learning?

GenAI is no longer just here; it’s inevitable, inextricable and irreversible. In no time at all, we and our learners will be using it without thinking about the fact that we’re using it. The tools are about to become so seamless that it will be easy to forget to wonder how they work, or whether we should be using them for the task at hand.

The undercover ubiquity of GenAI may do more to change teaching, learning and work than its initial emergence did. Workflows will change, and we will not easily be able to go back to the pre-GPT era. As Dr. Sarah Elaine Eaton proposes, we are entering the Postplagiarism era, with GenAI as a co-writer, researcher and coach. It won’t be helpful to suspect learners of ‘cheating’ when their core tools are prioritising GenAI assistance. So we will need to adapt, redesign, relearn and create differently. We will need to help parents and educators make sense of the change, and not just start all over, but find valuable use-cases of the tools to save them time and stress as they adapt to the new reality.

I’ve written a bunch on here already about how we might adapt, but now is the time to explore your teaching and learning cultures, to rethink learning design and assessment, and to focus on what makes schooling really worthwhile, joyful and human, in ways that will support our students’ growth and adaptability.

What worries me more?

Guidelines and policies have emerged on the use of AI in education, yet from the user perspective they still treat it as a (near-)future consideration, a choice or something to plan for. There is little in them about what happens when it becomes unavoidable.

My biggest worry about this tipping point is that critical issues in AI development will fade into the background. Will discussions of ethical, equitable, sustainable and unbiased development of AI get left behind when there is no friction in the choice to use GenAI?

I worry for the learners who will get caught up in the period of adaptation: those who might make unwitting mistakes that get them in trouble with systems and people still slowly adapting to GenAI. They’re not in a rush to use AI to ‘cheat’; they’re here to learn and find meaning in their work.

I am worried about the immense power of GenAI to create harm through fake news, fake images, cybersecurity threats and misuse of data. It has just become even harder to tell fact from fiction, and we will need to develop critical AI and media literacies as protective factors… but the outputs seem so realistic that traditional information and media literacy skills might fall short. Current best guidance, such as this from MIT, is helpful, but for how much longer? Will we have the time or capacity to analyse everything we see and read as the outputs get ever more convincing?

I also worry for those whose careers and current employment seem uncertain as AI proliferation threatens their roles before they have time to retrain, adapt or refocus. I know this is something that worries our students and their parents.

But I’m hopeful too

I’m hopeful that, once we adapt to the embedded-GenAI reality, global conversations about what matters in teaching and learning, what matters in life and success, and what is needed for living purposeful, fulfilled or just plain happy lives will align with each other and result in meaningful change and action in education. [Update – The Optimalist has a good post on what this might mean for parents].

I’m hopeful that, beyond the big-tech companies, inclusive, open-source communities such as HuggingFace will harness AI for good and help shift power in learning and action in meaningful ways; that real expertise will shine through; that we’ll continue to centre the safe, ethical and equitable development of AI; and that we’ll develop broader understandings of what Prof. Maha Bali describes as critical AI literacies.

We thought the pandemic provided the activation energy to shift the system – and it proved we could – but is GenAI going to be the catalyst that sustains real and lasting change? When the amazing becomes invisible, when we adapt and thrive, will we be able to focus on what makes us more human and caring, and tackle the big problems?


Header image generated in Adobe’s Firefly, with the prompt: A closeup photo of a group of diverse young students working around a laptop in a futuristic classroom, natural lighting, photorealistic. Aspect ratio 16:9.
