
A Video Introduction to Artificial Intelligence

An Introduction to Artificial Intelligence

This video was created for the Leys AI group, a group formed at my school to discuss the topic. I wrote, shot, and edited the video in a single afternoon, so it's not my best work. Despite the quick turnaround, it was awarded a headmaster's commendation.

Transcript

Hello everybody, welcome to the first Leys Artificial Intelligence Group meeting.

Mr Howe asked me to start off today with a short introduction to the concept of AI, and what we really mean when we say artificial intelligence. I'm not going to go on about the super nitty-gritty technical details, since what we're here for is an understanding just technical enough for us to figure out new ways we can use this incredible tool in everyday life at the Leys. Artificial intelligence is a very open term, but it is typically defined as "developing computer systems that can perform tasks which typically require human intelligence." Okay, but what does that actually mean?

The tools you're all probably familiar with, like ChatGPT, DALL-E, and perhaps Google's Bard, are all forms of generative AI. These systems work by being trained on extremely large sets of data, called datasets. A machine learning tool is given a set of rules and a set of data, and it links rules and patterns to that data. Think of it this way: if you gave a machine learning tool a dataset full of pictures of people, half with blue eyes and half with brown eyes, and you also gave the tool each person's genetic makeup, it could learn to determine whether a person has brown eyes or blue eyes from their genetic makeup alone. The AI has no concept of human biology and doesn't even understand genetics; it simply identifies a pattern and applies it in new scenarios.

The systems that learn these patterns are called neural networks, and they work in a way loosely similar to how our brains do, capturing data and linking it to other data they have already captured. Over time, these neural networks can become pretty good at generating their own data based on the information they've been fed. This leads us to the present day and what is known as a large language model: effectively an AI that has been trained on billions of pieces of human writing and can now generate its own, the most popular example being ChatGPT.
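The eye-colour example above can be sketched as a tiny pattern-learner. This is a toy perceptron (a much simpler cousin of a real neural network) trained on invented binary "genetic marker" features; the data and feature names are made up purely for illustration and bear no relation to real genetics.

```python
# Toy illustration: learn to predict eye colour (1 = blue, 0 = brown)
# from invented binary "genetic marker" features using a perceptron.
# The dataset and markers are fabricated for demonstration only.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Adjust weights so the model links feature patterns to labels."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = y - pred  # 0 when correct, so no update happens
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Invented dataset: each row is [marker_A, marker_B, marker_C] -> eye colour
samples = [[1, 0, 1], [1, 1, 1], [0, 1, 0], [0, 0, 1]]
labels = [1, 1, 0, 0]
w, b = train_perceptron(samples, labels)

# The model has no concept of biology -- it just found a pattern
# (here, marker_A) and can apply it to an unseen genetic profile.
print(predict(w, b, [1, 0, 0]))  # 1 (predicts blue eyes)
```

The model never "understands" genetics; it only nudges numeric weights until its guesses match the examples, which is the same basic idea, scaled up enormously, behind the systems described above.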

It would be unfair, of course, to only talk about large language models, since these only refer to the text-based tools. Image generation tools are known as large visual models and tools that perform tasks for you are known as large action models.

This technology is evolving day by day, and the tools keep getting better and better. For example, only a couple of weeks ago Google released the latest version of its large language model, named Gemini, and it's incredible. The headline change is not an increase in its dataset but rather an increase in the context window to 10 million tokens (pieces of data), compared to GPT-4's measly 128 thousand tokens. What does that actually mean? Well, you might have pasted a document into ChatGPT and found that it can't accurately pull data from all of it, or that it forgets earlier parts of your conversation. That's because of its small context window: only around 16 thousand tokens for standard ChatGPT, which runs on GPT-3.5. But with Google's new Gemini, the model can remember and pull from a full hour of video, 11 hours of audio, or over 700 thousand words. This kind of thing could be game-changing in education, where a model like Gemini could scrape through multiple textbooks to give you the most accurate answer to a question according to your exam board's specifications.
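To get a feel for why the 700,000-word figure overwhelms a 16-thousand-token window, here is a back-of-the-envelope sketch. It assumes a common rule of thumb of roughly 0.75 words per token for English text; real tokenizers vary, so treat the numbers as rough estimates, not any model's actual count.

```python
# Rough illustration of what a "context window" limit means in practice.
# Assumption: ~0.75 words per token, a common rule of thumb for English
# (actual tokenizer output differs between models).

def approx_tokens(word_count, words_per_token=0.75):
    """Estimate how many tokens a text of the given word count uses."""
    return int(word_count / words_per_token)

def fits_in_context(word_count, context_window_tokens):
    """Would text of this length fit inside a model's context window?"""
    return approx_tokens(word_count) <= context_window_tokens

# 700,000 words is roughly 933,000 tokens under this heuristic:
print(fits_in_context(700_000, 16_000))      # False: far too big for ~16k
print(fits_in_context(700_000, 10_000_000))  # True: fits in 10 million
```

Anything past the window simply falls out of the model's "memory", which is why a larger context window lets a model work across whole textbooks rather than short excerpts.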

AI is not just the future; it's happening right now, so it's imperative that we discuss it. Thank you.

Note: apologies for the lack of sources; this project was completed before the creation of Monty Published.

This post is licensed under CC BY 4.0 by the author.