artificial intelligence Archives - Raspberry Pi Foundation
https://www.raspberrypi.org/blog/tag/artificial-intelligence/

Bridging the divide: Connecting global communities with Experience AI
https://www.raspberrypi.org/blog/bridging-the-divide-connecting-global-communities-with-experience-ai/
Thu, 29 May 2025

From smart devices to workplace tools, AI is becoming part of everyday life and a major part of how people are thinking about the future — raising big questions about access, skills, and readiness.

As governments around the world create AI strategies for the decade ahead, many see an urgent need to close the gap between the ways AI tools are already reshaping jobs and people's lives and the opportunities young people have to gain the skills and knowledge to keep pace with this rapid technological change. The gap is wider still for educationally underserved communities.

A group of students and educators holding an Experience AI poster.

That’s why we’re excited to share how Experience AI, our AI literacy programme, is helping organisations around the world create these much-needed opportunities for young people.

The value of a global network

Experience AI was co-developed in 2022 by us and industry experts at Google DeepMind with a clear mission: to equip teachers with free, accessible, easy-to-use classroom resources that build AI literacy from the ground up. The programme offers a suite of materials to help students understand real-world applications of AI, the basics of machine learning, and the ethical considerations around these technologies.

A picture of Philip Colligan delivering a talk.

In 2023, we started building an international Experience AI network by collaborating with a group of our existing educational partners. We saw a huge amount of interest and received very positive feedback, and through our partnerships we reached an estimated one million young people. In late 2024, with support from Google.org, we tripled the size of our Experience AI partner network to 21, with new organisations joining from across Europe, the Middle East, and Africa. In this way, we aim to reach an additional 2.3 million young people by December 2026, helping them to gain the knowledge and skills to confidently engage with AI in an ever-changing world.

Each partner in the Experience AI network is a unique educational organisation looking to create lasting social change. Through their local knowledge and networks, we can present Experience AI to educators and students in a way that is engaging and relevant for local communities. 

A group of students participating in an Experience AI session.

Partners help us to adapt and translate our resources, all while making sure that the core pedagogy and design principles of Experience AI are preserved. Just as importantly, these organisations train thousands of teachers on how to use the materials, providing educators with free support. Through this work, they reach communities that might otherwise never have had the opportunity to learn about AI.

We asked some of our partners to share their insights on the impact Experience AI is having on the teachers and young people in their communities.

Building communities

The Latvian Safer Internet Centre (LSIC), an initiative of our partner, the Latvian Internet Association (LIA), is dedicated to helping young people protect themselves online, and to preparing them for a fast-changing digital economy. As an Experience AI partner, they aim to train 850 teachers and support 43,000 students to build a strong foundation in AI literacy through the programme.

“We hope to spark a cultural shift in how AI is […] taught in Latvian schools. Our goal is for AI literacy to become a natural part of digital competence education, not an optional extra.”

A woman is delivering a presentation about Experience AI.

Based in Riga, the team is travelling to 18 different regions across Latvia to bring in-person professional development to teachers, including those in rural communities far from major cities. By meeting teachers where they are, the LIA are creating invaluable networks for learning and support between communities. Through hands-on training, they are also supporting teachers to bring Experience AI into their own classrooms, creating examples suited to their learners.

“We chose an in-person training model because it fosters a more collaborative and engaging environment, especially for teachers who are new to AI. Many educators, particularly those who are less confident with digital tools, benefit from direct interaction, real-time discussions, and the chance to ask questions in a supportive setting.” 

As an Experience AI partner, the Latvian Internet Association is not just delivering content but working to strengthen digital competency across the country and ensure that no teacher or student is left behind in Latvia’s AI journey. 

One teacher shares: “The classroom training was truly valuable: it gave us the chance to exchange ideas and reflect on our diverse experiences. Hearing different perspectives was enriching, and I’m glad we’re shaping the future of our schools together.”

“AI is for everyone”

EdCamp Ukraine’s mission is to unite educators and help them to grow. Operating from their main base in Kharkiv, near the Eastern border and the frontline of the ongoing war in Ukraine, they see AI as both a tool for new technological breakthroughs and as something that can help build a fairer, more efficient, and resilient society.

“We firmly believe AI should not only be an object of study — it must become a tool for amplifying human potential. AI should also not be a privilege, but a resource for everyone. We believe the Experience AI programme can truly transform education from the bottom up.”

A man is delivering a presentation about Experience AI to a group of educators.

Within their community of 50,000 teachers, EdCamp Ukraine ensures that every educator, regardless of their living conditions or where they work, can access high-quality, relevant support. For the organisation, the ongoing situation in Ukraine means being flexible with planning, preparing for a range of different outcomes, and being ready to pivot delivery to different locations or to an online setting when needed. These same considerations apply to EdCamp Ukraine's teacher community, who need to be ready to adapt their lessons for any scenario.

“Recognising these war-related challenges helps us see the bigger picture and always have contingency plans in place. We think ahead and develop flexible scenarios.”

Two educators looking at a laptop screen.

This year, the team piloted Experience AI through their community of trainers, who, when they’re not training, are busy teaching in the classroom. Teacher Yuliia shared how her students valued the opportunity to be creators, rather than just users of technology:

“One student, who is an active AI user, kept silent during the lesson. I thought he wasn’t interested, but during the reflection he shared a lot of positive feedback and expressed his gratitude. Other students said it was important that they weren’t just told about AI — they were using it, creating images, and working with apps.”

A group of educators looking at a laptop screen.

EdCamp Ukraine plans to roll out training for Ukrainian teachers this autumn, reaching 2,000 teachers and 40,000 young people by the end of next year. 

More countries, more classrooms 

Two new partners in Nigeria are about to join the Experience AI network, and there are many more organisations in more countries coming soon. As our partner network continues to grow, we are excited to reach more communities and give more young people around the world the chance to build AI literacy skills and knowledge. 

You can find out more about Experience AI on the website. If your organisation is interested in partnering with us to deliver Experience AI, please register your interest and we will let you know about opportunities to work with us.

Insights from a teacher trainer: Schools are ready to engage in AI — what they need is support
https://www.raspberrypi.org/blog/insights-from-a-teacher-trainer-experience-ai/
Thu, 15 May 2025

Today’s blog post is written by Dan Shilling, Programmes Manager at Parent Zone, one of our global partners for Experience AI.

“Educators have been struggling to find resources and support to teach young people about AI.”

This is something I’ve heard a lot when delivering Experience AI teacher training through Parent Zone’s partnership with the Raspberry Pi Foundation. 

An educator is delivering a presentation during a workshop.

Our partnership with the Raspberry Pi Foundation

Experience AI is an artificial intelligence (AI) literacy programme, co-developed by the Raspberry Pi Foundation and Google DeepMind, that teaches students aged 11 to 14 about AI and machine learning. Thanks to funding from Google.org, Parent Zone has partnered with the Raspberry Pi Foundation to provide free training to UK educators, equipping them with the skills they need to effectively deliver the programme in their settings.

The Experience AI resources help educators, including those from non-technical backgrounds, to deliver impactful lessons on AI and machine learning. Lesson resources span technical elements (e.g. data-driven models, bias) and practical elements (e.g. careers, safety).

Our face-to-face and virtual training sessions show teachers how to use the programme resources, as well as helping them feel more confident in the subject matter.  

The sessions also give me an opportunity to hear from teachers about how AI is being used and taught in classrooms, and the opportunities and challenges it’s creating.

A group of educators at a workshop.

Curiosity and experimentation

AI has a major presence in many schools now. 

Teachers tell me they’re seeing students use AI to support their homework. One teacher spoke about a student using a chatbot to help break down a maths problem, describing it like “having a tutor at home.”

Teachers are also using AI themselves to assist in their work — for example, to plan lessons, generate activities, and get ideas on how to explain complex topics more clearly. 

Openness to experimentation is clearly there. 

Educators at a workshop.

Addressing concerns

For all the benefits of AI, teachers also have concerns about it. 

Some have told me their students have no idea how easily these tools can be used to mislead or manipulate, through disinformation and deepfakes, for example. 

This is where the Experience AI resources meet educators' needs. Not only do they explain how AI and machine learning actually work, but they also address many pressing concerns around AI, from responsible usage and media literacy to how data bias can affect a model's output.

Positive changes

In all the workshops, what stands out to me most is how ready teachers are to engage. They want to understand more. They want to help their students make sense of AI, and use it positively. 

They’re grateful for practical, grounded training and support that doesn’t assume they’ve all got computer science degrees. After one of our sessions, a teacher said:

“The better we educate ourselves, the better we’re able to help young people. It’s important because it’s affecting their day-to-day lives. We can help them navigate AI platforms, but in a safe way.”

Educators at a workshop.

Join a network of AI-ready educators

If you’re a UK secondary school teacher, you can sign up for free training from Parent Zone, with dates available until November 2025.

For more information about Experience AI, visit our website.

Teaching about AI in schools: Take part in our Research and Educator Community Symposium
https://www.raspberrypi.org/blog/teaching-about-ai-in-schools-research-and-educator-community-symposium/
Thu, 31 Oct 2024

Worldwide, the use of generative AI systems and related technologies is transforming our lives. From marketing and social media to education and industry, these technologies are being used everywhere, even if it isn’t obvious. Yet, despite the growing availability and use of generative AI tools, governments are still working out how and when to regulate such technologies to ensure they don’t cause unforeseen negative consequences.

How, then, do we equip our young people to deal with the opportunities and challenges that they are faced with from generative AI applications and associated systems? Teaching them about AI technologies seems an important first step. But what should we teach, when, and how?

A teacher aids children in the classroom

Researching AI curriculum design

Researchers at the Raspberry Pi Foundation have been reviewing research that will help inform curriculum design and resource development for teaching about AI in school. As part of this work, a number of research themes have been established, which we would like to explore with educators at a face-to-face symposium.

These research themes include the SEAME model, a simple way to analyse learning experiences about AI technology, as well as anthropomorphisation and how this might influence the formation of mental models about AI products. These research themes have become the cornerstone of the Experience AI resources we’ve co-developed with Google DeepMind. We will be using these materials to exemplify how the research themes can be used in practice as we review the recently published UNESCO AI competencies.

A group of educators at a workshop.

Most importantly, we will also review how we can help teachers and learners move from a rule-based view of problem solving to a data-driven view, from computational thinking 1.0 to computational thinking 2.0.
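To make the contrast concrete, here is a toy sketch (entirely hypothetical, and not taken from the symposium materials): a rule-based spam check where the programmer writes the rule by hand, next to a data-driven version that derives its word list from a handful of labelled examples.

```python
# Rule-based (computational thinking 1.0): the programmer writes the rule.
def is_spam_rule_based(message: str) -> bool:
    return "win money" in message.lower()

# Data-driven (computational thinking 2.0): the rule is derived from data.
# Here we "learn" which words appear more often in spam than in non-spam.
# The example messages below are invented for illustration.
def learn_spam_words(examples):
    spam_counts, ham_counts = {}, {}
    for text, label in examples:
        counts = spam_counts if label == "spam" else ham_counts
        for word in text.lower().split():
            counts[word] = counts.get(word, 0) + 1
    # Keep words that are more common in spam than in non-spam
    return {w for w, c in spam_counts.items() if c > ham_counts.get(w, 0)}

examples = [
    ("win money now", "spam"),
    ("claim your money prize", "spam"),
    ("lunch at noon", "ham"),
    ("project meeting now", "ham"),
]
spam_words = learn_spam_words(examples)

def is_spam_data_driven(message: str) -> bool:
    words = set(message.lower().split())
    return len(words & spam_words) >= 2

print(is_spam_data_driven("win money now"))   # → True
print(is_spam_data_driven("lunch at noon"))   # → False
```

The rule-based version only ever catches what the programmer anticipated; the data-driven version changes its behaviour whenever the examples change, which is exactly the shift in thinking the symposium will explore.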

A call for teacher input on the AI curriculum

Over ten years ago, teachers in England experienced a large-scale change in what they needed to teach in computing lessons when programming was more formally added to the curriculum. As we enter a similar period of change — this time to introduce teaching about AI technologies — we want to hear from teachers as we collectively start to rethink our subject and curricula. 

We think it is imperative that educators’ voices are heard as we reimagine computer science and add data-driven technologies into an already densely packed learning context. 

Educators at a workshop.

Join our Research and Educator Community Symposium

On Saturday, 1 February 2025, we are running a Research and Educator Community Symposium in collaboration with the Raspberry Pi Computing Education Research Centre.

In this symposium, we will bring together UK educators and researchers to review research themes, competency frameworks, and early international AI curricula and to reflect on how to advance approaches to teaching about AI. This will be a practical day of collaboration to produce suggested key concepts and pedagogical approaches and highlight research needs. 

Educators and researchers at an event.

This symposium focuses on teaching about AI technologies, so we will not be looking at which AI tools might be used in general teaching and learning or how they may change teacher productivity. 

It is vitally important for young people to learn how to use AI technologies in their daily lives so they can become discerning consumers of AI applications. But how should we teach them? Please help us start to consider the best approach by signing up for our Research and Educator Community Symposium by 9 December 2024.

Information at a glance

When: Saturday, 1 February 2025 (10am to 5pm)

Where: Raspberry Pi Foundation Offices, Cambridge

Who: If you have started teaching about AI, are creating related resources, are providing professional development about AI technologies, or if you are planning to do so, please apply to attend our symposium. Travel funding is available for teachers in England.

Please note we expect to be oversubscribed, so book early and tell us about why you are interested in taking part. We will notify all applicants of the outcome of their application by 11 December.

Introducing new artificial intelligence and machine learning projects for Code Clubs
https://www.raspberrypi.org/blog/artificial-intelligence-projects-for-kids/
Tue, 29 Oct 2024

We’re pleased to share a new collection of Code Club projects designed to introduce creators to the fascinating world of artificial intelligence (AI) and machine learning (ML). These projects bring the latest technology to your Code Club in fun and inspiring ways, making AI and ML engaging and accessible for young people. We’d like to thank Amazon Future Engineer for supporting the development of this collection.

A man on a blue background, with question marks over his head, surrounded by various objects and animals, such as apples, planets, mice, a dinosaur and a shark.

The value of learning about AI and ML

By engaging with AI and ML at a young age, creators gain a clearer understanding of the capabilities and limitations of these technologies, helping them to challenge misconceptions. This early exposure also builds foundational skills that are increasingly important in various fields, preparing creators for future educational and career opportunities. Additionally, as AI and ML become more integrated into educational standards, having a strong base in these concepts will make it easier for creators to grasp more advanced topics later on.

What’s included in this collection

We’re excited to offer a range of AI and ML projects that feature both video tutorials and step-by-step written guides. The video tutorials are designed to guide creators through each activity at their own pace and are captioned to improve accessibility. The step-by-step written guides support creators who prefer learning through reading. 

The projects are crafted to be flexible and engaging. The main part of each project can be completed in just a few minutes, leaving lots of time for customisation and exploration. This setup allows for short, enjoyable sessions that can easily be incorporated into Code Club activities.

The collection is organised into two distinct paths, each offering a unique approach to learning about AI and ML:

Machine learning with Scratch introduces foundational concepts of ML through creative and interactive projects. Creators will train models to recognise patterns and make predictions, and explore how these models can be improved with additional data.

The AI Toolkit introduces various AI applications and technologies through hands-on projects using different platforms and tools. Creators will work with voice recognition, facial recognition, and other AI technologies, gaining a broad understanding of how AI can be applied in different contexts.

Inclusivity is a key aspect of this collection. The projects cater to various skill levels and are offered alongside an unplugged activity, ensuring that everyone can participate, regardless of available resources. Creators will also have the opportunity to stretch themselves — they can explore advanced technologies like Adobe Firefly and practical tools for managing Ollama and Stable Diffusion models on Raspberry Pi computers.

Project examples

A piece of cheese is displayed on a screen. There are multiple mice around the screen.

One of the highlights of our new collection is Chomp the cheese, which uses Scratch Lab’s experimental face recognition technology to create a game students can play with their mouth! This project offers a playful introduction to facial recognition while keeping the experience interactive and fun. 

A big orange fish on a dark blue background, with green leaves surrounding the fish.

Fish food uses Machine Learning for Kids, with creators training a model to control a fish using voice commands.

An illustration of a pink brain is displayed on a screen. There are two hands next to the screen playing the 'Rock paper scissors' game.

In Teach a machine, creators train a computer to recognise different objects such as fingers or food items. This project introduces classification in a straightforward way using the Teachable Machine platform, making the concept easy to grasp. 

Two men on a blue background, surrounded by question marks, a big green apple and a red tomato.

Apple vs tomato also uses Teachable Machine, but this time creators are challenged to train a model to differentiate between apples and tomatoes. Initially, the model exhibits bias due to limited data, prompting discussions on the importance of data diversity and ethical AI practices. 
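The bias the project demonstrates is easy to reproduce outside Teachable Machine. Below is a toy sketch (this is not how Teachable Machine works internally, and the feature values are invented) of a 3-nearest-neighbour classifier trained on five apples but only one tomato: the under-represented class gets outvoted even for samples very close to its own example.

```python
# Toy illustration of data bias: a 3-nearest-neighbour classifier trained
# on five apples and a single tomato. Each example is a hypothetical
# (redness, roundness) pair; all values are invented for illustration.
train = [
    ((0.3, 0.8), "apple"), ((0.4, 0.7), "apple"), ((0.2, 0.9), "apple"),
    ((0.5, 0.8), "apple"), ((0.3, 0.7), "apple"),
    ((0.9, 0.9), "tomato"),  # only one tomato example: the data is imbalanced
]

def classify(sample, k=3):
    """Vote among the k nearest training examples."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: dist2(sample, item[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Even a sample right next to the tomato gets outvoted: with only one
# tomato in the data, at most one of the three neighbours can be "tomato".
print(classify((0.88, 0.9)))  # → apple
```

Adding more tomato examples fixes the behaviour, which mirrors the discussion the project prompts about data diversity.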

Three people on a light blue background, surrounded by music notes and a microbit.

Dance detector allows creators to use accelerometer data from a micro:bit to train a model to recognise dance moves like Floss or Disco. This project combines physical computing with AI, helping creators explore movement recognition technology they may have experienced in familiar contexts such as video games. 

A green dinosaur in a forest is being observed by a person hiding in the bush holding the binoculars.

Dinosaur decision tree is an unplugged activity where creators use a paper-based branching chart to classify different types of dinosaurs. This hands-on project introduces the concept of decision-making structures, where each branch of the chart represents a choice or question leading to a different outcome. By constructing their own decision tree, creators gain a tactile understanding of how these models are used in ML to analyse data and make predictions. 
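As a rough illustration of the same idea in code, here is a hypothetical dinosaur decision tree; the questions, features, and species below are invented for this sketch rather than taken from the activity itself.

```python
# A minimal, hypothetical decision tree for classifying dinosaurs.
# Each question is a branch; each leaf is a classification.

def classify_dinosaur(eats_meat: bool, walks_on_two_legs: bool,
                      has_long_neck: bool) -> str:
    """Follow the branches of the tree to reach a classification."""
    if eats_meat:
        if walks_on_two_legs:
            return "theropod (e.g. Tyrannosaurus)"
        return "other carnivore"
    # Plant eaters: split on neck length next
    if has_long_neck:
        return "sauropod (e.g. Diplodocus)"
    return "other herbivore"

print(classify_dinosaur(eats_meat=True, walks_on_two_legs=True,
                        has_long_neck=False))
# → theropod (e.g. Tyrannosaurus)
```

In ML, the difference is that the questions and their order are learned from data rather than written by hand, but following the branches works exactly like the paper chart.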

These AI projects are designed to support young people to get hands-on with AI technologies in Code Clubs and other non-formal learning environments. Creators can also enter one of their projects into Coolest Projects by taking a short video showing their project and any code used to make it. Their creation will then be showcased in the online gallery for people all over the world to see.

Hello World #25 out now: Generative AI
https://www.raspberrypi.org/blog/hello-world-25-out-now-generative-ai/
Mon, 23 Sep 2024

Since they became publicly available at the end of 2022, generative AI tools have been hotly discussed by educators: what role should these tools for generating human-seeming text, images, and other media play in teaching and learning?

Two years later, the one thing most people agree on is that, like it or not, generative AI is here to stay. And as a computing educator, you probably have your learners and colleagues looking to you for guidance about this technology. We’re sharing how educators like you are approaching generative AI in issue 25 of Hello World, out today for free.

Digital image of a copy of Hello World magazine, issue 25.

Generative AI and teaching

Since our ‘Teaching and AI’ issue a year ago, educators have been making strides in grappling with generative AI’s place in their classrooms, and with the potential risks to young people. In this issue, you’ll hear from a wide range of educators who are approaching this technology in different ways.

For example:

  • Laura Ventura from Gwinnett County Public Schools (GCPS) in Georgia, USA shares how the GCPS team has integrated AI throughout their K–12 curriculum
  • Mark Calleja from our team guides you through using the OCEAN prompt process to reliably get the results you want from an LLM 
  • Kip Glazer, principal at Mountain View High School in California, USA shares a framework for AI implementation aimed at school leaders
  • Stefan Seegerer, a researcher and educator in Germany, discusses why unplugged activities help us focus on what’s really important in teaching about AI

This issue also includes practical solutions to problems that are unique to computer science educators:

  • Graham Hastings in the UK shares his solution to tricky crocodile clips when working with micro:bits
  • Riyad Dhuny shares his case study of home-hosting a learning management system with his students in Mauritius

And there is lots more for you to discover in issue 25.

Whether or not you use generative AI as part of your teaching practice, it’s important for you to be aware of AI technologies and how your young people may be interacting with them. In his article “A problem-first approach to the development of AI systems”, Ben Garside from our team affirms that:

“A big part of our job as educators is to help young people navigate the changing world and prepare them for their futures, and education has an essential role to play in helping people understand AI technologies so that they can avoid the dangers.

Our approach at the Raspberry Pi Foundation is not to focus purely on the threats and dangers, but to teach young people to be critical users of technologies and not passive consumers. […]

Our call to action to educators, carers, and parents is to have conversations with your young people about generative AI. Get to know their opinions on it and how they view its role in their lives, and help them to become critical thinkers when interacting with technology.”

Share your thoughts & subscribe to Hello World

Computing teachers are once again being asked to teach something they didn’t study. With generative AI, as with all things computing, we want to support your teaching and share your successes. We hope you enjoy this issue of Hello World, and please get in touch with your article ideas or what you would like to see in the magazine.


We’d like to thank Oracle for supporting this issue.

Impact of Experience AI: Reflections from students and teachers
https://www.raspberrypi.org/blog/impact-of-experience-ai-reflections-from-students-and-teachers/
Tue, 17 Sep 2024

“I’ve enjoyed actually learning about what AI is and how it works, because before I thought it was just a scary computer that thinks like a human,” a student learning with Experience AI at King Edward’s School, Bath, UK, told us. 

This is the essence of what we aim to do with our Experience AI lessons, which demystify artificial intelligence (AI) and machine learning (ML). Through Experience AI, teachers worldwide are empowered to confidently deliver engaging lessons with a suite of resources that inspire and educate 11- to 14-year-olds about AI and the role it could play in their lives.

“I learned new things and it changed my mindset that AI is going to take over the world.” – Student, Malaysia

Experience AI students in Malaysia

Developed by us with Google DeepMind, our first set of Experience AI lesson resources was aimed at a UK audience and launched in April 2023. Next, we released tailored versions of the resources for five other countries, working in close partnership with organisations in Malaysia, Kenya, Canada, Romania, and India. Thanks to new funding from Google.org, we’re now expanding Experience AI for 16 more countries and creating new resources on AI safety, with the aim of providing leading-edge AI education for more than 2 million young people across Europe, the Middle East, and Africa.

In this blog post, you’ll hear directly from students and teachers about the impact the Experience AI lessons have had so far. 

Case study: Experience AI in Malaysia

Penang Science Cluster in Malaysia is among the first organisations we’ve partnered with for Experience AI. Speaking to Malaysian students learning with Experience AI, we found that the lessons were often very different from what they had expected. 

Launch of Experience AI in Malaysia

“I actually thought it was going to be about boring lectures and not much about AI but more on coding, but we actually got to do a lot of hands-on activities, which are pretty fun. I thought AI was just about robots, but after joining this, I found it could be made into chatbots or could be made into personal helpers.” – Student, Malaysia

“Actually, I thought AI was mostly related to robots, so I was expecting to learn more about robots when I came to this programme. It widened my perception on AI.” – Student, Malaysia

The Malaysian government actively promotes AI literacy among its citizens, and working with local education authorities, Penang Science Cluster is using Experience AI to train teachers and equip thousands of young people in the state of Penang with the understanding and skills to use AI effectively. 

“We envision a future where AI education is as fundamental as mathematics education, providing students with the tools they need to thrive in an AI-driven world”, says Aimy Lee, Chief Operating Officer at Penang Science Cluster. “The journey of AI exploration in Malaysia has only just begun, and we’re thrilled to play a part in shaping its trajectory.”

Giving non-specialist teachers the confidence to introduce AI to students

Experience AI provides lesson plans, classroom resources, worksheets, hands-on activities, and videos to help teachers introduce a wide range of AI applications and help students understand how they work. The resources are based on research, and because we adapt them to each partner’s country, they are culturally relevant and relatable for students. Any teacher can use the resources in their classroom, whether or not they have a background in computing education. 

“Our Key Stage 3 Computing students now feel immensely more knowledgeable about the importance and place that AI has in their wider lives. These lessons and activities are engaging and accessible to students and educators alike, whatever their specialism may be.” – Dave Cross, North Liverpool Academy, UK

“The feedback we’ve received from both teachers and learners has been overwhelmingly positive. They consistently rave about how accessible, fun, and hands-on these resources are. What’s more, the materials are so comprehensive that even non-specialists can deliver them with confidence.” – Storm Rae, The National Museum of Computing, UK

Experience AI teacher training in Kenya

“[The lessons] go above and beyond to ensure that students not only grasp the material but also develop a genuine interest and enthusiasm for the subject.” – Teacher, Changamwe Junior School, Mombasa, Kenya

Sparking debates on bias and the limitations of AI

When learners gain an understanding of how AI works, it gives them the confidence to discuss areas where the technology doesn’t work well or its output is incorrect. These classroom debates deepen and consolidate their knowledge, and help them to use AI more critically.

“Students enjoyed the practical aspects of the lessons, like categorising apples and tomatoes. They found it intriguing how AI could sometimes misidentify objects, sparking discussions on its limitations. They also expressed concerns about AI bias, which these lessons helped raise awareness about. I didn’t always have all the answers, but it was clear they were curious about AI’s implications for their future.” – Tracey Mayhead, Arthur Mellows Village College, Peterborough, UK

Experience AI students in the UK

“The lessons that we trialled took some of the ‘magic’ out of AI and started to give the students an understanding that AI is only as good as the data that is used to build it.” – Jacky Green, Waldegrave School, UK 

“I have enjoyed learning about how AI is actually programmed, rather than just hearing about how impactful and great it could be.” – Student, King Edward’s School, Bath, UK 

“It has changed my outlook on AI because now I’ve realised how much AI actually needs human intelligence to be able to do anything.” – Student, Arthur Mellows Village College, Peterborough, UK 

“I didn’t really know what I wanted to do before this but now knowing more about AI, I probably would consider a future career in AI as I find it really interesting and I really liked learning about it.” – Student, Arthur Mellows Village College, Peterborough, UK 

If you’d like to get involved with Experience AI as an educator and use our free lesson resources with your class, you can start by visiting experience-ai.org.

Why we’re taking a problem-first approach to the development of AI systems https://www.raspberrypi.org/blog/why-were-taking-a-problem-first-approach-to-the-development-of-ai-systems/ https://www.raspberrypi.org/blog/why-were-taking-a-problem-first-approach-to-the-development-of-ai-systems/#comments Tue, 06 Aug 2024 11:02:05 +0000 https://www.raspberrypi.org/?p=87923 If you are into tech, keeping up with the latest updates can be tough, particularly when it comes to artificial intelligence (AI) and generative AI (GenAI). Sometimes I admit to feeling this way myself, however, there was one update recently that really caught my attention. OpenAI launched their latest iteration of ChatGPT, this time adding…

The post Why we’re taking a problem-first approach to the development of AI systems appeared first on Raspberry Pi Foundation.

If you are into tech, keeping up with the latest updates can be tough, particularly when it comes to artificial intelligence (AI) and generative AI (GenAI). Sometimes I admit to feeling this way myself; however, there was one update recently that really caught my attention. OpenAI launched their latest iteration of ChatGPT, this time adding a female-sounding voice. Their launch video demonstrated the model supporting the presenters with a maths problem and giving advice around presentation techniques, sounding friendly and jovial along the way.

A finger clicking on an AI app on a phone.

Adding a voice to these AI models was perhaps inevitable as big tech companies try to compete for market share in this space, but it got me thinking: why would they add a voice? Why does the model have to flirt with the presenter?

Working in the field of AI, I’ve always seen AI as a really powerful problem-solving tool. But with GenAI, I often wonder what problems the creators are trying to solve and how we can help young people understand the tech. 

What problem are we trying to solve with GenAI?

The fact is that I’m really not sure. That’s not to suggest that I think that GenAI hasn’t got its benefits — it does. I’ve seen so many great examples in education alone: teachers using large language models (LLMs) to generate ideas for lessons, to help differentiate work for students with additional needs, and to create example answers to exam questions for their students to assess against the mark scheme. Educators are creative people, and whilst it is cool to see so many good uses of these tools, I wonder whether the developers had specific problems in mind while creating them, or whether they simply hoped that society would find a good use somewhere down the line.

An educator points to an image on a student's computer screen.

Whilst there are good uses of GenAI, you don’t need to dig very deeply before you start unearthing some major problems. 

Anthropomorphism

Anthropomorphism relates to assigning human characteristics to things that aren’t human. This is something that we all do, all of the time, usually without consequences. The problem with doing this with GenAI is that, unlike an inanimate object you’ve named (I call my vacuum cleaner Henry, for example), chatbots are designed to be human-like in their responses, so it’s easy for people to forget they’re not speaking to a human.

A photographic rendering of a smiling face emoji seen through a refractive glass grid, overlaid with a diagram of a neural network.
Image by Alan Warburton / © BBC / Better Images of AI / Social Media / CC-BY 4.0

As feared, since my last blog post on the topic, evidence has started to emerge that some young people are showing a desire to befriend these chatbots, going to them for advice and emotional support. It’s easy to see why. Here is an extract from an exchange between the presenters at the ChatGPT-4o launch and the model:

ChatGPT (presented with a live image of the presenter): “It looks like you’re feeling pretty happy and cheerful with a big smile and even maybe a touch of excitement. Whatever is going on? It seems like you’re in a great mood. Care to share the source of those good vibes?”
Presenter: “The reason I’m in a good mood is we are doing a presentation showcasing how useful and amazing you are.”
ChatGPT: “Oh stop it, you’re making me blush.” 

The Family Online Safety Institute (FOSI) conducted a study looking at the emerging hopes and fears that parents and teenagers have around GenAI.

One teenager said:

“Some people just want to talk to somebody. Just because it’s not a real person, doesn’t mean it can’t make a person feel — because words are powerful. At the end of the day, it can always help in an emotional and mental way.”  

The prospect of teenagers seeking solace and emotional support from a generative AI tool is a concerning development. While these AI tools can mimic human-like conversations, their outputs are based on patterns and data, not genuine empathy or understanding. The ultimate concern is that this leaves vulnerable young people open to manipulation in ways we can’t predict. Relying on AI for emotional support could lead to a sense of isolation and detachment, hindering the development of healthy coping mechanisms and interpersonal relationships.

A photographic rendering of a simulated middle-aged white woman against a black background, seen through a refractive glass grid and overlaid with a distorted diagram of a neural network.
Image by Alan Warburton / © BBC / Better Images of AI / Virtual Human / CC-BY 4.0

Arguably worse is the recent news of the world’s first AI beauty pageant. The very thought of this probably elicits some kind of emotional response depending on your view of beauty pageants. There are valid concerns around misogyny and reinforcing misguided views on body norms, but it’s also important to note that the winner of “Miss AI” is being described as a lifestyle influencer. The questions we should be asking are: who are the creators trying to have influence over, and what influence are they trying to gain that they couldn’t get before they created a virtual woman?

DeepFake tools

Another use of GenAI is the ability to create DeepFakes. If you’ve watched the most recent Indiana Jones movie, you’ll have seen the technology in play, making Harrison Ford appear as a younger version of himself. This is not in itself a bad use of GenAI technology, but the application of DeepFake technology can easily become problematic. For example, recently a teacher was arrested for creating a DeepFake audio clip of the school principal making racist remarks. The recording went viral before anyone realised that AI had been used to generate the audio clip. 

Easy-to-use DeepFake tools are freely available and, as with many tools, they can be used inappropriately to cause damage or even break the law. One such instance is the rise in using the technology for pornography. This is particularly dangerous for young women, who are the more likely victims, and can cause severe and long-lasting emotional distress and harm to the individuals depicted, as well as reinforce harmful stereotypes and the objectification of women. 

Why we should focus on using AI as a problem-solving tool

Technological developments causing unforeseen negative consequences is nothing new. A lot of our job as educators is about helping young people navigate the changing world and preparing them for their futures, and education has an essential role in helping people understand AI technologies so that they can avoid the dangers.

Our approach at the Raspberry Pi Foundation is not to focus purely on the threats and dangers, but to teach young people to be critical users of technologies and not passive consumers. Having an understanding of how these technologies work goes a long way towards achieving sufficient AI literacy skills to make informed choices, and this is where our Experience AI programme comes in.

An Experience AI banner.

Experience AI is a set of lessons developed in collaboration with Google DeepMind and, before we wrote any lessons, our team thought long and hard about what we believe are the important principles that should underpin teaching and learning about artificial intelligence. One such principle is taking a problem-first approach and emphasising that computers are tools that help us solve problems. In the Experience AI fundamentals unit, we teach students to think about the problem they want to solve before thinking about whether or not AI is the appropriate tool to use to solve it. 

Taking a problem-first approach doesn’t by default avoid an AI system causing harm — there’s still the chance it will increase bias and societal inequities — but it does focus the development on the end user and the data needed to train the models. I worry that focusing on market share and opportunity rather than the problem to be solved is more likely to lead to harm.

Another set of principles that underpins our resources is teaching about fairness, accountability, transparency, privacy, and security (Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI) and higher education, Understanding Artificial Intelligence Ethics and Safety) in relation to the development of AI systems. These principles are aimed at making sure that creators of AI models develop models ethically and responsibly. The principles also apply to consumers, as we need to get to a place in society where we expect these principles to be adhered to and consumer power means that any models that don’t, simply won’t succeed. 

Furthermore, once students have created their models in the Experience AI fundamentals unit, we teach them about model cards, an approach that promotes transparency about their models. Much like how nutritional information on food labels allows the consumer to make an informed choice about whether or not to buy the food, model cards give information about an AI model such as the purpose of the model, its accuracy, and known limitations such as what bias might be in the data. Students write their own model cards based on the AI solutions they have created. 
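To make the food-label analogy concrete, here is a minimal sketch of what a model card might look like if captured as structured data. The field names and values below are hypothetical illustrations of the general idea (purpose, accuracy, known limitations), not the actual format used in the Experience AI lessons.

```python
# Illustrative sketch of a model card as structured data.
# All field names and values here are hypothetical examples.
apple_tomato_card = {
    "model_name": "Apple vs tomato image classifier",
    "purpose": "Classify photos as either an apple or a tomato",
    "training_data": "200 photos taken by students (100 apples, 100 tomatoes)",
    "accuracy": "90% on a held-out test set of 40 photos",
    "known_limitations": [
        "Only red apples were in the training data, so green apples are often misclassified",
        "Photos were taken indoors; accuracy drops in outdoor lighting",
    ],
}

def render_model_card(card: dict) -> str:
    """Format a model card dictionary as readable text, like a food label."""
    lines = [f"Model card: {card['model_name']}"]
    lines.append(f"Purpose: {card['purpose']}")
    lines.append(f"Training data: {card['training_data']}")
    lines.append(f"Accuracy: {card['accuracy']}")
    lines.append("Known limitations:")
    for limitation in card["known_limitations"]:
        lines.append(f"  - {limitation}")
    return "\n".join(lines)

print(render_model_card(apple_tomato_card))
```

The point of the structure is the same as in the lessons: a consumer reading the card can decide whether the model is fit for their purpose before using it.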

What else can we do?

At the Raspberry Pi Foundation, we have set up an AI literacy team with the aim of embedding principles around AI safety, security, and responsibility into our resources, aligning them with the Foundation’s mission to help young people to:

  • Be critical consumers of AI technology
  • Understand the limitations of AI
  • Expect fairness, accountability, transparency, privacy, and security, and work towards reducing inequities caused by technology
  • See AI as a problem-solving tool that can augment human capabilities, but not replace or narrow their futures 

Our call to action to educators, carers, and parents is to have conversations with your young people about GenAI. Get to know their opinions on GenAI and how they view its role in their lives, and help them to become critical thinkers when interacting with technology. 

How we’re learning to explain AI terms for young people and educators https://www.raspberrypi.org/blog/explaining-ai-terms-young-people-educators/ https://www.raspberrypi.org/blog/explaining-ai-terms-young-people-educators/#comments Tue, 13 Jun 2023 08:34:56 +0000 https://www.raspberrypi.org/?p=84142 What do we talk about when we talk about artificial intelligence (AI)? It’s becoming a cliche to point out that, because the term “AI” is used to describe so many different things nowadays, it’s difficult to know straight away what anyone means when they say “AI”. However, it’s true that without a shared understanding of…

The post How we’re learning to explain AI terms for young people and educators appeared first on Raspberry Pi Foundation.

What do we talk about when we talk about artificial intelligence (AI)? It’s becoming a cliche to point out that, because the term “AI” is used to describe so many different things nowadays, it’s difficult to know straight away what anyone means when they say “AI”. However, it’s true that without a shared understanding of what AI and related terms mean, we can’t talk about them, or educate young people about the field.

A group of young people demonstrate a project at Coolest Projects.

So when we started designing materials for the Experience AI learning programme in partnership with leading AI unit Google DeepMind, we decided to create short explanations of key AI and machine learning (ML) terms. The explanations are doubly useful:

  1. They ensure that we give learners and teachers a consistent and clear understanding of the key terms across all our Experience AI resources. Within the Experience AI Lessons for Key Stage 3 (age 11–14), these key terms are also mapped to the target concepts and learning objectives presented in the learning graph. 
  2. They help us talk about AI and AI education in our team. Thanks to sharing an understanding of what terms such as “AI”, “ML”, “model”, or “training” actually mean and how to best talk about AI, our conversations are much more productive.

As an example, here is our explanation of the term “artificial intelligence” for learners aged 11–14:

Artificial intelligence (AI) is the design and study of systems that appear to mimic intelligent behaviour. Some AI applications are based on rules. More often now, AI applications are built using machine learning that is said to ‘learn’ from examples in the form of data. For example, some AI applications are built to answer questions or help diagnose illnesses. Other AI applications could be built for harmful purposes, such as spreading fake news. AI applications do not think. AI applications are built to carry out tasks in a way that appears to be intelligent.

You can find 32 explanations in the glossary that is part of the Experience AI Lessons. Here’s an insight into how we arrived at the explanations.

Reliable sources

In order to ensure the explanations are as precise as possible, we first identified reliable sources. These included among many others:

Explaining AI terms to Key Stage 3 learners: Some principles

Vocabulary is an important part of teaching and learning. When we use vocabulary correctly, we can support learners to develop their understanding. If we use it inconsistently, this can lead to alternate conceptions (misconceptions) that can interfere with learners’ understanding. You can read more about this in our Pedagogy Quick Read on alternate conceptions.

Some of our principles for writing explanations of AI terms were that the explanations need to: 

  • Be accurate
  • Be grounded in education research best practice
  • Be suitable for our target audience (Key Stage 3 learners, i.e. 11- to 14-year-olds)
  • Be free of terms that have alternative meanings in computer science, such as “algorithm”

We engaged in an iterative process of writing explanations, gathering feedback from our team and our Experience AI project partners at Google DeepMind, and adapting the explanations. Then we went through the feedback and adaptation cycle until we all agreed that the explanations met our principles.

A real banana and an image of a banana shown on the screen of a laptop are both labelled "Banana".
Image: Max Gruber / Better Images of AI / Ceci n’est pas une banane / CC-BY 4.0

An important part of what emerged as a result, aside from the explanations of AI terms themselves, was a blueprint for how not to talk about AI. One aspect of this is avoiding anthropomorphism, detailed by Ben Garside from our team here.

As part of designing the Experience AI Lessons, creating the explanations helped us to:

  • Decide which technical details we needed to include when introducing AI concepts in the lessons
  • Figure out how to best present these technical details
  • Settle debates about where it would be appropriate, given our understanding and our learners’ age group, to abstract or leave out details

Using education research to explain AI terms

One of the ways education research informed the explanations was that we used semantic waves to structure each term’s explanation in three parts: 

  1. Top of the wave: The first one or two sentences are a high-level abstract explanation of the term, kept as short as possible, while introducing key words and concepts.
  2. Bottom of the wave: The middle part of the explanation unpacks the meaning of the term using a common example, in a context that’s familiar to a young audience. 
  3. Top of the wave: The final one or two sentences repack what was explained in the example in a more abstract way, reconnecting with the term. This ending echoes the top of the wave at the beginning of the explanation, and can also add further information that leads on to another concept. 

Most explanations also contain ‘middle of the wave’ sentences, which add additional abstract content, bridging the ‘bottom of the wave’ concrete example to the ‘top of the wave’ abstract content.

Here’s the “artificial intelligence” explanation broken up into the parts of the semantic wave:

  • Artificial intelligence (AI) is the design and study of systems that appear to mimic intelligent behaviour. (top of the wave)
  • Some AI applications are based on rules. More often now, AI applications are built using machine learning that is said to ‘learn’ from examples in the form of data. (middle of the wave)
  • For example, some AI applications are built to answer questions or help diagnose illnesses. Other AI applications could be built for harmful purposes, such as spreading fake news. (bottom of the wave)
  • AI applications do not think. (middle of the wave)
  • AI applications are built to carry out tasks in a way that appears to be intelligent. (top of the wave)
Our “artificial intelligence” explanation broken up into the parts of the semantic wave. Red = top of the wave; yellow = middle of the wave; green = bottom of the wave

Was it worth our time?

Some of the explanations went through 10 or more iterations before we agreed they were suitable for publication. After months of thinking about, writing, correcting, discussing, and justifying the explanations, it’s tempting to wonder whether I should have just prompted an AI chatbot to generate the explanations for me.

A photograph of a tree alongside two simplified versions generated by a decision tree model trained to predict pixel colour values, illustrating that models are simplifications of reality but can still be useful.
Rens Dimmendaal & Johann Siemens / Better Images of AI / Decision Tree reversed / CC-BY 4.0

I tested this idea by getting a chatbot to generate an explanation of “artificial intelligence” using the prompt “Explain what artificial intelligence is, using vocabulary suitable for KS3 students, avoiding anthropomorphism”. The result included quite a few inconsistencies with our principles, as well as a couple of technical inaccuracies. Perhaps I could have tweaked the prompt for the chatbot in order to get a better result. However, relying on a chatbot’s output would mean missing out on some of the value of doing the work of writing the explanations in collaboration with my team and our partners.

The visible result of that work is the explanations themselves. The invisible result is the knowledge we all gained, and the coherence we reached as a team, both of which enabled us to create high-quality resources for Experience AI. We wouldn’t have gotten to know what resources we wanted to write without writing the explanations ourselves and improving them over and over. So yes, it was worth our time.

What do you think about the explanations?

The process of creating and iterating the AI explanations highlights how opaque the field of AI still is, and how little we yet know about how best to teach and learn about it. At the Raspberry Pi Foundation, we now know just a bit more about that and are excited to share the results with teachers and young people.

You can access the Experience AI Lessons and the glossary with all our explanations at experience-ai.org. The glossary of AI explanations is just in its first published version: we will continue to improve it as we find out more about how to best support young people to learn about this field.

Let us know what you think about the explanations and whether they’re useful in your teaching. Onwards with the exciting work of establishing how to successfully engage young people in learning about and creating with AI technologies.

Experience AI: The excitement of AI in your classroom https://www.raspberrypi.org/blog/experience-ai-launch-lessons/ https://www.raspberrypi.org/blog/experience-ai-launch-lessons/#comments Tue, 18 Apr 2023 10:00:00 +0000 https://www.raspberrypi.org/?p=83694 We are delighted to announce that we’ve launched Experience AI, our new learning programme to help educators to teach, inspire, and engage young people in the subject of artificial intelligence (AI) and machine learning (ML). Experience AI is a new educational programme that offers cutting-edge secondary school resources on AI and machine learning for teachers…

The post Experience AI: The excitement of AI in your classroom appeared first on Raspberry Pi Foundation.

We are delighted to announce that we’ve launched Experience AI, our new learning programme to help educators to teach, inspire, and engage young people in the subject of artificial intelligence (AI) and machine learning (ML).

Experience AI is a new educational programme that offers cutting-edge secondary school resources on AI and machine learning for teachers and their students. Developed in partnership by the Raspberry Pi Foundation and DeepMind, the programme aims to support teachers in the exciting and fast-moving area of AI, and get young people passionate about the subject.

The importance of AI and machine learning education

Artificial intelligence and machine learning applications are already changing many aspects of our lives. From search engines, social media content recommenders, self-driving cars, and facial recognition software, to AI chatbots and image generation, these technologies are increasingly common in our everyday world.

Young people who understand how AI works will be better equipped to engage with the changes AI applications bring to the world, to make informed decisions about using and creating AI applications, and to choose what role AI should play in their futures. They will also gain critical thinking skills and awareness of how they might use AI to come up with new, creative solutions to problems they care about.

The AI applications people are building today are predicted to affect many career paths. In 2020, the World Economic Forum estimated that AI would replace some 85 million jobs by 2025 and create 97 million new ones. Many of these future jobs will require some knowledge of AI and ML, so it’s important that young people develop a strong understanding from an early age.

A group of young people investigate computer hardware together.
 Develop a strong understanding of the concepts of AI and machine learning with your learners.

Experience AI Lessons

Something we get asked a lot is: “How do I teach AI and machine learning with my class?” To answer this question, we have developed a set of free lessons for secondary school students (age 11 to 14) that give you everything you need, including lesson plans, slide decks, worksheets, and videos.

The lessons focus on relatable applications of AI and are carefully designed so that teachers in a wide range of subjects can use them. You can find out more about how we used research to shape the lessons and how we aim to avoid misconceptions about AI.

The lessons are also for you if you’re an educator or volunteer outside of a school setting, such as in a coding club.

The six lessons

  1. What is AI?: Learners explore the current context of artificial intelligence (AI) and how it is used in the world around them. Looking at the differences between rule-based and data-driven approaches to programming, they consider the benefits and challenges that AI could bring to society. 
  2. How computers learn: Learners focus on the role of data-driven models in AI systems. They are introduced to machine learning and find out about three common approaches to creating ML models. Finally, the learners explore classification, a specific application of ML.
  3. Bias in, bias out: Learners create their own machine learning model to classify images of apples and tomatoes. They discover that a limited dataset is likely to lead to a flawed ML model. Then they explore how bias can appear in a dataset, resulting in biased predictions produced by an ML model.
  4. Decision trees: Learners take their first in-depth look at a specific type of machine learning model: decision trees. They see how different training datasets result in the creation of different ML models, experiencing first-hand what the term ‘data-driven’ means. 
  5. Solving problems with ML models: Learners are introduced to the AI project lifecycle and use it to create a machine learning model. They apply a human-focused approach to working on their project, train an ML model, and finally test their model to find out its accuracy.
  6. Model cards and careers: Learners finish the AI project lifecycle by creating a model card to explain their machine learning model. To finish off the unit, they explore a range of AI-related careers, hear from people working in AI research at DeepMind, and explore how they might apply AI and ML to their interests.
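To illustrate the ‘data-driven’ idea at the heart of lessons 3 and 4, here is a toy sketch (our own illustration, not taken from the lesson materials) of a one-rule decision ‘stump’ that learns a threshold on a single, made-up ‘redness’ feature from labelled examples. Change the training data and you change the model it produces.

```python
# Toy illustration (not from the lessons): a one-rule decision "stump"
# that learns a threshold on a single feature from labelled examples.
def train_stump(examples):
    """Pick the redness threshold that best separates the two labels.

    examples: list of (redness, label) pairs, where label is "apple" or "tomato".
    Returns a model as a (threshold, label_below, label_above) tuple.
    """
    best = None
    thresholds = sorted({redness for redness, _ in examples})
    for threshold in thresholds:
        for below, above in [("apple", "tomato"), ("tomato", "apple")]:
            # Count how many training examples this rule classifies correctly.
            correct = sum(
                1
                for redness, label in examples
                if (below if redness < threshold else above) == label
            )
            if best is None or correct > best[0]:
                best = (correct, threshold, below, above)
    _, threshold, below, above = best
    return (threshold, below, above)

def predict(model, redness):
    threshold, below, above = model
    return below if redness < threshold else above

# The model is entirely determined by the training data it sees.
dataset = [(0.2, "apple"), (0.3, "apple"), (0.8, "tomato"), (0.9, "tomato")]
model = train_stump(dataset)
print(predict(model, 0.25))  # prints "apple": 0.25 falls below the learned threshold
```

A biased dataset (for example, one containing only red apples) would shift the learned threshold and produce biased predictions, which is exactly the behaviour lesson 3 asks students to explore.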

As part of this exciting first phase, we’re inviting teachers to participate in research to help us further develop the resources. All you need to do is sign up through our website, download the lessons, use them in your classroom, and give us your valuable feedback.

An educator points to an image on a student's computer screen.
 Ben Garside, one of our lead educators working on Experience AI, takes a group of students through one of the new lessons.

Support for teachers

We’ve designed the Experience AI lessons with teacher support in mind, and so that you can deliver them to your learners aged 11 to 14 no matter what your subject area is. Each of the lesson plans includes a section that explains new concepts, and the slide decks feature embedded videos in which DeepMind’s AI researchers describe and bring these concepts to life for your learners.

We will also be offering you a range of new teacher training opportunities later this year, including a free online CPD course — Introduction to AI and Machine Learning — and a series of AI-themed webinars.

Tell us your feedback

We will be inviting schools across the UK to test and improve the Experience AI lessons through feedback. We are really looking forward to working with you to shape the future of AI and machine learning education.

Visit the Experience AI website today to get started.

The post Experience AI: The excitement of AI in your classroom appeared first on Raspberry Pi Foundation.

How anthropomorphism hinders AI education https://www.raspberrypi.org/blog/ai-education-anthropomorphism/ Thu, 13 Apr 2023

The post How anthropomorphism hinders AI education appeared first on Raspberry Pi Foundation.

In the 1950s, Alan Turing explored the central question of artificial intelligence (AI). He thought that the original question, “Can machines think?”, would not provide useful answers because the terms “machine” and “think” are hard to define. Instead, he proposed changing the question to something more provable: “Can a computer imitate intelligent behaviour well enough to convince someone they are talking to a human?” This is commonly referred to as the Turing test.

It’s been hard to miss the newest generation of AI chatbots that companies have released over the last year. News articles and stories about them seem to be everywhere at the moment. So you may have heard of machine learning (ML) chatbot applications such as ChatGPT and LaMDA. These applications are advanced enough to have caused renewed discussion about the Turing test and whether chatbot applications are sentient.

Chatbots are not sentient

Without any knowledge of how people create such chatbot applications, it’s easy to develop an incorrect mental model of these applications as living entities. With some awareness of sci-fi stories, you might even start to imagine what they could look like or associate a gender with them.

A person in front of a cloudy sky, seen through a refractive glass grid. Parts of the image are overlaid with a diagram of a neural network.
Image: Alan Warburton / © BBC / Better Images of AI / Quantified Human / CC BY 4.0

The reality is that these new chatbots are applications based on a large language model (LLM) — a type of machine learning model that has been trained with huge quantities of text, written by people and taken from places such as books and the internet, e.g. social media posts. An LLM predicts the probable order of combinations of words, a bit like the autocomplete function of a smartphone. Based on these probabilities, it can produce text outputs. LLM chatbot applications run on servers with huge amounts of computing power that people have built in data centres around the world.
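To make the autocomplete analogy concrete, here is a toy ‘next word’ predictor in Python. It is only a sketch with made-up example text: real LLMs use neural networks trained on vastly more data, but the core idea of predicting the next word from probabilities learned from text is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the huge text collections used to train real LLMs.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count which word follows each word (a bigram model -- far simpler than a
# neural network, but it shows the same core idea: predicting the next word
# from probabilities derived from text written by people).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most probable next word, like a phone's autocomplete."""
    counts = following[word]
    total = sum(counts.values())
    word_probs = {w: c / total for w, c in counts.items()}
    return max(word_probs, key=word_probs.get)

print(predict_next("the"))  # -> "cat" ("cat" most often follows "the" here)
```

Note that nothing in this program ‘knows’ what a cat is; it only tracks which words tend to follow which, which is a useful (if heavily simplified) mental model for how LLM outputs arise.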

Our AI education resources for young people

AI applications are often described as “black boxes” or “closed boxes”: they may be relatively easy to use, but it’s not as easy to understand how they work. We believe that it’s fundamentally important to help everyone, especially young people, to understand the potential of AI technologies and to open these closed boxes to understand how they actually work.

As always, we want to demystify digital technology for young people, to empower them to be thoughtful creators of technology and to make informed choices about how they engage with technology — rather than just being passive consumers.

That’s the goal we have in mind as we’re working on lesson resources to help teachers and other educators introduce KS3 students (ages 11 to 14) to AI and ML. We will release these Experience AI lessons very soon.

Why we avoid describing AI as human-like

Our researchers at the Raspberry Pi Computing Education Research Centre have started investigating the topic of AI and ML, including thinking deeply about how AI and ML applications are described to educators and learners.

To support learners to form accurate mental models of AI and ML, we believe it is important to avoid using words that can lead to learners developing misconceptions around machines being human-like in their abilities. That’s why ‘anthropomorphism’ is a term that comes up regularly in our conversations about the Experience AI lessons we are developing.

To anthropomorphise: “to show or treat an animal, god, or object as if it is human in appearance, character, or behaviour”

https://dictionary.cambridge.org/dictionary/english/anthropomorphize

Anthropomorphising AI in teaching materials might lead to learners believing that there is sentience or intention within AI applications. That misconception would distract learners from the fact that it is people who design AI applications and decide how they are used. It also risks reducing learners’ desire to take an active role in understanding AI applications, and in the design of future applications.

Examples of how anthropomorphism is misleading

Avoiding anthropomorphism helps young people to open the closed box of AI applications. Take the example of a smart speaker. It’s easy to describe a smart speaker’s functionality in anthropomorphic terms such as “it listens” or “it understands”. However, we think it’s more accurate and empowering to explain smart speakers as systems developed by people to process sound and carry out specific tasks. Rather than telling young people that a smart speaker “listens” and “understands”, it’s more accurate to say that the speaker receives input, processes the data, and produces an output. This language helps to distinguish how the device actually works from the illusion of a persona the speaker’s voice might conjure for learners.
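As a rough illustration of that input, process, output framing, here is a deliberately simple Python sketch. The commands and responses are invented for the example; a real smart speaker adds speech-to-text models and cloud services, but the framing is the same.

```python
# A deliberately simple sketch of the input -> process -> output framing:
# the system receives input, pattern-matches it against known commands,
# and produces an output. No 'listening' or 'understanding' involved.

def process_request(input_text: str) -> str:
    """Pattern-match the input data and generate an output."""
    commands = {
        "what time is it": "The time is 10:00",
        "play music": "Playing music",
    }
    # The system detects a known pattern in its input; unmatched input
    # simply generates a fallback output.
    return commands.get(input_text.lower().strip(), "Sorry, I didn't catch that")

print(process_request("Play music"))  # -> "Playing music"
```

Describing the code this way (receives input, processes data, produces output) models exactly the language we suggest using with learners.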

Eight photos of the same tree taken at different times of the year, displayed in a grid. The final photo is highly pixelated. Groups of white blocks run across the grid from left to right, gradually becoming aligned.
Image: David Man & Tristan Ferne / Better Images of AI / Trees / CC BY 4.0

Another example is the use of AI in computer vision. ML models can, for example, be trained to identify when there is a dog or a cat in an image. An accurate ML model, on the surface, displays human-like behaviour. However, the model operates very differently to how a human might identify animals in images. Where humans would point to features such as whiskers and ear shapes, ML models process pixels in images to make predictions based on probabilities.
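To see what ‘processing pixels’ means, here is a toy Python sketch. Both the image and the ‘model’ are invented for illustration (a real classifier learns millions of parameters from labelled training data), but it shows that the input is just numbers and the output is a probability, not a judgement about whiskers or ears.

```python
# To an ML model, an image is just a grid of numbers (pixel values),
# and a prediction is a probability. This toy 'model' is illustrative
# only -- real image classifiers learn their parameters from data.

image = [  # a 4x4 greyscale image: each number is a pixel brightness (0-255)
    [ 12,  40, 200, 210],
    [ 30, 180, 220, 215],
    [ 25, 190, 230, 205],
    [ 10,  35, 195, 200],
]

def toy_model(pixels):
    """Map pixel values to a number between 0 and 1 (a 'probability')."""
    total = sum(sum(row) for row in pixels)
    max_possible = len(pixels) * len(pixels[0]) * 255
    return round(total / max_possible, 2)  # pretend this is P(image contains a dog)

print(toy_model(image))  # a value between 0 and 1, computed purely from pixels
```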

Better ways to describe AI

The Experience AI lesson resources we are developing introduce students to AI applications and teach them about the ML models that are used to power them. We have put a lot of work into thinking about the language we use in the lessons and the impact it might have on the emerging mental models of the young people (and their teachers) who will be engaging with our resources.

It’s not easy to avoid anthropomorphism while talking about AI, especially considering the industry standard language in the area: artificial intelligence, machine learning, computer vision, to name but a few examples. At the Foundation, we are still training ourselves not to anthropomorphise AI, and we take a little bit of pleasure in picking each other up on the odd slip-up.

Here are some suggestions to help you describe AI better:

Avoid using: phrases such as “AI learns” or “AI/ML does”
Instead use: phrases such as “AI applications are designed to…” or “AI developers build applications that…”

Avoid using: words that describe the behaviour of people (e.g. see, look, recognise, create, make)
Instead use: system-type words (e.g. detect, input, pattern match, generate, produce)

Avoid using: AI/ML as a countable noun, e.g. “new artificial intelligences emerged in 2022”
Instead use: ‘AI/ML’ as a scientific discipline, similarly to how you use the term “biology”

The purpose of our AI education resources

If our approach is right, then whether or not the young people who engage with Experience AI grow up to become AI developers, we will have helped them to become discerning users of AI technologies, more likely to see such products for what they are: data-driven applications, not sentient machines.

If you’d like to get involved with Experience AI and use our lessons with your class, you can start by visiting us at experience-ai.org.
