As companies roll out their responses to OpenAI's ChatGPT, such as Google's Bard and Snap's My AI, alarm bells are starting to ring. When AI pioneer Geoffrey Hinton quit Google this May to voice his concerns about AI's growth, it felt like a scene straight out of a supervillain movie. But in this case, we still have to decide who the bad guy is.
We know young people are using this technology. When we checked in with The Receipt, our global network of 9,000+ Gen Zers, we found that 72% of respondents have used ChatGPT, with nearly 1 in 5 of them using it at least daily.
Those using the service go to it most commonly for work tasks (38%), for getting questions answered (20%), or just for fun (13%).

As Gen Z uses ChatGPT for both work and play, we’re also figuring out how to do so responsibly. Gen Z has grown up in the era of creating technology because we can, not necessarily because we should. So, we’re asking tough questions like how we cite work that uses ChatGPT and how we contend with the biases that ChatGPT perpetuates.
How it works
Whether you’re using ChatGPT to write email copy or ‘your mom’ jokes, the question of who owns those thoughts is an important one.
To know who gets to claim ChatGPT's outputs, we have to know where they come from. Simply put, ChatGPT is a mega-statistical model trained on text scraped from billions of media moments. Think of every tweet about Dramageddon and every SNL cold open being consumed by a massive database. (Truly terrifying.) The AI then uses probability to string new outputs together, predicting one likely word at a time. And unlike a simple search query, ChatGPT can remember what you've talked about earlier in the conversation.
All of this is to say, ChatGPT isn't copying and pasting answers from the internet; it's using billions of data points to take a guess at creating something new.
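For the curious, here's a minimal sketch of that guessing game in Python. To be clear, this is a toy illustration, not ChatGPT's actual architecture: the tiny hand-written probability table is our own made-up stand-in for the billions of parameters a real model learns from scraped text. But the core move is the same, picking each next word by sampling from learned probabilities.

```python
import random

# Toy "language model": for each word, the probabilities of likely next words.
# These numbers are invented for illustration; a real model learns billions
# of parameters from scraped text instead of using a hand-written table.
NEXT_WORD_PROBS = {
    "the": {"internet": 0.5, "answer": 0.3, "cat": 0.2},
    "internet": {"loves": 0.6, "argues": 0.4},
    "loves": {"the": 0.7, "cat": 0.3},
    "argues": {"about": 1.0},
    "about": {"the": 1.0},
    "answer": {"is": 1.0},
    "is": {"the": 0.5, "cat": 0.5},
    "cat": {},  # no learned continuation after this word
}

def generate(start: str, max_words: int = 8) -> str:
    """Build a sentence by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(max_words - 1):
        options = NEXT_WORD_PROBS.get(words[-1], {})
        if not options:
            break  # nothing learned after this word; stop generating
        # Sample in proportion to probability: a guess, not a fact lookup.
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the internet loves the cat" (varies each run)
```

Run it a few times and you'll get different sentences from the same starting word, which is exactly why ChatGPT's answers are best guesses rather than retrieved facts.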
However, there is one key difference between AI and a person: accountability. An algorithm isn't to blame when the guess goes wrong; people are.
Reckoning with a Flawed System
We know AI carries the same flaws and biases as its creators, and these flaws are the most human thing about it.

Photo Caption: How did ChatGPT get this so wrong? ChatGPT runs on a probabilistic model, which means it guesses at facts based on how we've discussed them in our own media and data. It all comes down to ChatGPT's best guess.
In an article in The Guardian, AI expert Meredith Broussard says, “I’m arguing that racism, sexism, and ableism are systemic problems that are baked into our technological systems because they’re baked into society.”
The issue at hand is not only the threat of spreading misinformation but also the impossible task of identifying who can be held accountable, especially when the harm falls on marginalized groups. It's a bit like asking who is responsible for institutionalized racism.

Photo Caption: When asked to name the top Black creators on TikTok, ChatGPT included creators who are not Black. Not only is this factually incorrect; it also demonstrates the biases retained even in open AI systems.
ChatGPT, while an interesting approach to democratizing fast and efficient information, also leaves people who belong to one or more underrepresented groups vulnerable.
Road to responsible AI
When we copy and paste results from AI into our professional and academic work, we are giving control over the narrative to a technology that lacks the ability to take responsibility for its output. Misinformation and cultural misrepresentation are just a couple of natural consequences of an amoral platform designed by imperfect (and sometimes immoral) humans.
So, it becomes our responsibility as users and contributors to AI platforms to 1) fact-check information gathered from platforms like ChatGPT, 2) insert our own critical understanding of issues into the output, and 3) take responsibility when things go wrong.
JUV Data Collection (The Receipt):
ChatGPT: March 30, 2023 – 113 respondents