We Don’t Need an “AI Policy”

Schools that maintain a focus on building a culture of academic integrity – by knowing our learners, iterating and evolving – will be better adapted to the opportunities and challenges of AI in education.

The ripples of AI tools have been present in the waters of education for the last year or more, but the massive rock-splash of OpenAI’s ChatGPT release at the end of November 2022 has really rocked the boats of many institutions. Edu-Twitter (and LinkedIn, and blogs and all that) has been aflutter with wildly different responses, news stories and reactions, ranging from ‘ban it’ to ‘disrupt everything’. We are in an interesting – some say exciting, others frightening – period of adaptation. As the waters calm, we can hold the rudder and think in considered ways. This might be the change that some institutions, resistant to updating practices, need to set a new course.

Time for a little fun – here’s an AI synthesis from ElevenLabs of my voice reading that first part. Makes me sound a bit posher than I really am…

There are some fabulous examples of educators leaning in to the tools with their learners, new apps for education that can support teaching (and make teachers’ work more efficient), and discussions all over the place about whether or not schools need a new “AI Policy” to deal with it all.

We don’t need an AI Policy.

Where in the school’s existing – or planned – policy documents and guidance do the challenges and opportunities find a natural home?

Some examples might be Academic Integrity, Assessment and Responsible/Acceptable Use Policies. While AI tools are getting more powerful, they are still just another set of tech tools, and we’ve been wrestling with this for decades: calculators, Google, social media and more have each sparked their own excitement as they emerged.

Here’s an example from the IB on Academic Integrity. With a statement such as this, is there really any need for a separate document on AI, or even an explicit reference to AI?

“Academic integrity is a principle in education and a choice to act in a responsible way so others can trust us. It means conducting all aspects of your academic life in a responsible and ethical manner. The IB expects students to produce genuine and authentic pieces of work, that represent their own abilities.” IB. 2022.

Edit/Update, Jan 2, 2023: Video Response to ChatGPT from Dr. Matt Glanville, Head of Assessment Principles and Practice, International Baccalaureate. Shared as part of a symposium on AI from Frankfurt International School.

Update, Feb 27, 2023: Official IB response, on the IB Blog, by Dr. Matt Glanville.

When adapting policies as part of your school’s ecosystem, think about:

  • Are these policies aligned, complementary and in a useful format?
  • Do the words of the policy reflect the core values and beliefs on learning at our school?
  • Do our actions in implementing policy align with our stated beliefs?
  • When new tools emerge, and if/when they are misused, are we in a position to respond with integrity and dignity?

So where is the ‘real work’?

Even in the most future-thinking schools, people might feel a sense of unease at how quickly these powerful tools have affected student outputs. It’s OK to feel that way, as part of the journey forwards. We all value integrity, and no one wants to be caught in a position where our learners’ work is inauthentic. What can we do about it?

  • Evaluating and updating the existing policy ecosystem to ensure alignment and adaptability.
  • Education.
  • Modeling and sharing appropriate, even inspiring, use of new technologies to improve learning.
  • Nurturing a culture and practices of learning that put humans first, making thinking, learning and assessment more visible, iterative and collaborative.
  • Supporting teachers in finding and using AI tools that can help them in their own roles, and demystifying the technology with guidance and examples.

When working on assessment, particularly with high stakes, it is more important than ever to make visible the ‘invisible middle’ of set-and-get: the thinking, drafting and feedback that happens between setting a task and getting the finished product. Challenges will appear and problems will arise, but they are likely to be less critical in classroom and school cultures that take a coaching and mentoring approach to ideation, drafting, feedback and submission.

Create a culture of academic integrity and powerful learning.

Where do we really want to invest our effort: in creating new policies to catch ‘cheats’, or in building a positive school learning culture? Which approach will better help us in our role of developing future-ready learners who can navigate the world with adaptability and integrity?

An interesting question that quickly arises in conversation is ‘how do we cite the work produced by AI tools?’ – one that librarians and citation agencies will surely be working on. Has the tool been used as a search engine, a research assistant, a co-author or something else? How much original thought is in the work? Has the tool made the knowledge more accessible to students through summaries, translations or other adaptive means?

It makes me wonder if we should start including a ‘tech tools and how I used them’ section in assessments, or as part of research workbooks/process journals. Will the tools and a quest for integrity make information and media literacy skills more explicit?

Adapting to an AI Future

In December, I shared the (If You) USEME-AI framework to guide discussions and actions towards adapting to the new reality, and the most recent iteration is posted below. Since December, there have been loads of great articles and ideas shared across the internet; some that have resonated with me are collected in the ‘Linked posts & resources’ section below.

Whatever happens next, please always remember to think about integrity in learning and the ethics of technological advancement. Model productive relationships with students and tech so that they can grow into a world that is safe, hopeful, inclusive and led with dignity.

Linked posts & resources:

(If You) USEME-AI is licensed under Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA). For citation, please use: (If You) USEME-AI model for adapting to AI in schools, by Stephen Taylor at the Western Academy of Beijing.

Post header image was generated in MidJourney, by Stephen.

