Teaching AI literacy with Teachable Machine (and a small connection to education)

At every Artificial Intelligence (AI) conference I attend, a speaker will inevitably say, “People always ask me: is AI going to take over the world, or our jobs?”

This highlights the importance of AI literacy: not simply to reduce the number of people who hold such misconceptions, but also to help people understand the limitations of AI so that they can ask better questions when working with it.

Hence, at a recent Teachers’ Conference workshop, I segued from talking about how computer vision can aid recycling into a short lesson on AI literacy.

To set the context, the workshop participants had little to no prior experience with AI and Machine Learning (ML). In that workshop, I introduced Teachable Machine, a web-based tool from Google that makes creating machine learning models fast, easy, and accessible to everyone. You can explore it at teachablemachine.withgoogle.com, which also has a short introductory video.

We explored creating an Image Project within Teachable Machine to do image classification. Participants were given a few Hello Panda packaging boxes of various colours, along with a few plastic bottles, to train their models.

The following images show the process of training the model. Training happens in the browser, so an internet connection and a browser on a laptop/PC suffice; no special apps are required.

[Image: training each category of items for classification]
[Image: training the model to recognise images of plastic bottles and cardboard]
[Image: testing out the model]
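
Teachable Machine also lets you export the trained model to run it outside the browser. As a rough sketch (assuming the usual Keras export, which gives a keras_model.h5 and a labels.txt, and a hypothetical test photo), classifying an image in Python might look like this:

```python
# Rough sketch: run a Teachable Machine image model outside the browser.
# Assumes the usual Keras export (keras_model.h5 + labels.txt); the image
# file name below is hypothetical.
import numpy as np
from PIL import Image
from tensorflow.keras.models import load_model

model = load_model("keras_model.h5", compile=False)
with open("labels.txt") as f:
    class_names = [line.strip() for line in f]

# Teachable Machine image models take 224x224 RGB input scaled to roughly [-1, 1].
image = Image.open("pink_metal_bottle.jpg").convert("RGB").resize((224, 224))
data = np.expand_dims(np.asarray(image, dtype=np.float32) / 127.5 - 1.0, axis=0)

probabilities = model.predict(data)[0]
for name, p in zip(class_names, probabilities):
    print(f"{name}: {p:.1%}")
```

For the workshop itself, none of this was needed; everything ran in the browser.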

Initially, the participants followed the instructions and trained their models on the plastic bottles and cardboard packaging. They then started to explore scanning other items not given to them, such as their own water bottles. That was when the questions started pouring in, and the lesson segued into a chat about AI literacy.

It all started when one of the groups trained the cardboard class with only pink Hello Panda packaging and then tested the model with a pink metal bottle. When the model predicted with close to 100% confidence that the pink metal bottle was cardboard, the group started to ask whether the model learns by colour.

I then told the group to train the cardboard class with both pink and red Hello Panda packaging and observe how the results changed. As expected, the model still predicted that the pink metal bottle (an unseen object) was cardboard, but with lower confidence.

From that example, the following points were highlighted to the participants.

The model is only as good as the data fed to it

As the saying goes, “garbage in, garbage out.” When we do not provide good data, the model might not be robust enough to generalise to the correct prediction. Even with the rise of zero-shot classifiers, if a label is not provided, the model can at best guess and force an item into one of the existing categories. For instance, the model probably started off by thinking that cardboard is pink. But with more examples of what cardboard looks like, it can learn to generalise and pick out other features that tell us something is cardboard. In addition, materials are not classified simply by how they look, but also by how they feel; perhaps a multi-modal model would be more effective here.
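
To make the “learning by colour” idea concrete, here is a toy numerical sketch. It is purely illustrative and has nothing to do with how Teachable Machine works under the hood: a hand-rolled nearest-centroid classifier over two made-up features, pinkness and matte texture. Trained only on pink boxes and clear bottles, colour becomes a shortcut; once red boxes are added, the pink metal bottle is still misclassified, but by a smaller margin, mirroring the drop in confidence the group observed.

```python
# Toy sketch of the "learning by colour" shortcut using a nearest-centroid
# classifier over two made-up features: [pinkness, matte_texture].
import numpy as np

def centroid(points):
    return np.mean(points, axis=0)

def nearest_class(item, centroids):
    # Distance from the item to each class centroid; predict the nearest class.
    distances = {name: float(np.linalg.norm(item - c)) for name, c in centroids.items()}
    return min(distances, key=distances.get), distances

pink_metal_bottle = np.array([0.90, 0.10])   # very pink, shiny rather than matte

# Narrow training data: only pink boxes and clear bottles, so "pink" and
# "cardboard" are perfectly correlated.
narrow = {
    "cardboard": centroid(np.array([[0.90, 0.80], [0.95, 0.75]])),  # pink boxes
    "plastic":   centroid(np.array([[0.05, 0.15], [0.10, 0.20]])),  # clear bottles
}
print(nearest_class(pink_metal_bottle, narrow))
# -> "cardboard": the bottle's pinkness drags it towards the cardboard centroid

# Wider training data: red boxes added, so cardboard is no longer synonymous
# with pink; the bottle is still called "cardboard", but by a smaller margin.
wider = {
    "cardboard": centroid(np.array([[0.90, 0.80], [0.95, 0.75],
                                    [0.40, 0.80], [0.45, 0.85]])),
    "plastic":   narrow["plastic"],
}
print(nearest_class(pink_metal_bottle, wider))
```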

In supervised training, how the data is labelled might introduce errors

If you haven’t noticed, the labels in the pictures above are incorrect (plastic and cardboard were swapped). If data is labelled incorrectly, the model will faithfully learn the wrong mapping and give incorrect classifications.
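
As a tiny toy sketch of this (again, not how Teachable Machine works internally), swapping the training labels is enough to make a perfectly reasonable classifier report the wrong class names with full confidence:

```python
# Toy sketch of a labelling error: the features are fine, but the labels were
# swapped when the data was prepared, so the classifier reports the wrong names.
from sklearn.neighbors import KNeighborsClassifier

# Made-up features: [pinkness, matte_texture]
X = [[0.90, 0.80], [0.95, 0.75],   # actually cardboard boxes
     [0.05, 0.15], [0.10, 0.20]]   # actually plastic bottles

swapped_labels = ["plastic", "plastic", "cardboard", "cardboard"]  # labelling mistake

model = KNeighborsClassifier(n_neighbors=1).fit(X, swapped_labels)
print(model.predict([[0.92, 0.78]]))  # a new cardboard box -> reported as "plastic"
```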

Analogy

Since it was a room of teachers, I used the following analogy to discuss our observations. Teachers often curate resources and provide multiple examples to teach our students. We can control the examples and the tests we set, but we cannot control what goes on in their heads. When we test them, however, we get to identify (or at least guess) their misconceptions, and perhaps tweak the material to fine-tune their understanding. We then follow up by testing them again to see whether they have learnt. If we teach them the wrong content, they will certainly learn something wrong, and that is when we need to “retrain” them with the right examples. The one difference is that I can retrain a model from a checkpoint, but I cannot ask students to simply forget what they have learnt and reset to a certain point.

Implications

Naturally, the next question is what the implications are if we were to trust models without a human in the loop. Typical case studies include loan approvals, job applications, crime prediction and insurance suitability. If the data provided is biased towards, or unrepresentative of, certain groups of people, we might end up with a model that discriminates. If we train a large language model (LLM) on a lot of online text full of negative, vulgar and gender- or race-biased comments, the downstream tasks might generate such text too. Therefore, much research has gone into cleaning up data to remove biases (which is itself subjective, as it depends on the people doing the cleaning).

I am glad that the session segued towards a discussion about AI literacy. Perhaps this can be introduced in the curriculum to let students explore and learn more about the limitations of AI so that they can start asking better questions about the model and the data.

Chan Kuang Wen

I enjoy programming and creating tools that increase efficiency. I am passionate about education and learning experiences, especially on sustainability.