I recently heard someone ask, “What’s next with ChatGPT?” This was a person who doesn’t have time to follow the news about generative AI on a regular basis.

So for those who haven’t had time to keep up, here are some interesting developments.

I’m not aiming to predict the future (well, maybe a little bit), only to summarize some recent news that might be of interest.

1. The free version of ChatGPT will connect to the internet, making hallucination a bit less of a problem. Currently, a web connection is available only in the Plus version ($20/month).

2. Language models from OpenAI, Microsoft, Google, and others will be incorporated into many tools that people already use (Microsoft Copilot for Windows 11, Google Docs, etc.). See also The AI revolution is about to take over your web browser.

3. Educators will find more ways to use generative AI creatively with students in the classroom. And some are moving beyond worrying about students writing essays with ChatGPT. See AI detectors: Why I won’t use them. See also Why Is My Attitude Towards Generative AI Different From Previous AI in Education? and 6 Tenets of Postplagiarism: Writing in the Age of Artificial Intelligence.

4. The amount of text you can work with at one time will keep increasing, to the point where you can load the text of an entire book (Anthropic’s Claude already does this) and summarize it, outline it, ask questions of it, and so on.
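To make that concrete, here is a minimal sketch of sending a long document to Claude with the Anthropic Python SDK. It assumes you have installed the SDK and set an API key; the model name is a placeholder, and a very long book may still need to be split if it exceeds the model’s context window.

```python
# A minimal sketch: ask Claude to outline a full-length text in one request.
# Assumes `pip install anthropic` and ANTHROPIC_API_KEY in the environment;
# the model name is a placeholder, and very long books may still need to be
# split to fit the model's context window.
import anthropic

client = anthropic.Anthropic()

with open("book.txt", encoding="utf-8") as f:
    book_text = f.read()

response = client.messages.create(
    model="claude-3-haiku-20240307",  # placeholder; use whichever model you have access to
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": f"Here is a book:\n\n{book_text}\n\n"
                       "Give me a chapter-by-chapter outline of its main arguments.",
        }
    ],
)

print(response.content[0].text)
```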

5. Many more software tools and apps built on this technology (large language models) will continue to appear, some useful and some not. A few useful ones: Duolingo, Be My Eyes, Elicit: The AI Research Assistant, and Explainpaper (for reading research papers). See this TED Talk from Sal Khan, How AI could save (not destroy) education.

6. Plugins will be incorporated into Bing Chat and Google’s Bard (the Plus version of ChatGPT already has plugins). However, they don’t always seem to work well yet. Listen to this podcast episode: Are ChatGPT Plugins Overhyped?

7. There will be some progress with making AI less biased and more multicultural. Some of the open source models will do this (see BLOOM). Another model (not open source) that shows promise is Claude from Anthropic, with their method known as “constitutional AI.”

8. Copyright issues will not be solved for a while, as it takes time for the courts to decide how to interpret the law. See this video for a summary: Generative AI Meets Copyright, by Pamela Samuelson, Professor of Law, UC Berkeley. See also this interesting development: Japan Goes All In: Copyright Doesn’t Apply To AI Training. In another article, Creative Commons makes the case that training AI models is fair use: Fair Use: Training Generative AI.

9. Large companies will connect language models to their own knowledge bases (a practice known as grounding), so that answers come with very little or no hallucination. See Bloomberg Uses Its Vast Data To Create New Finance AI, Introducing Healthcare-Specific Large Language Models from John Snow Labs, and Ethics of large language models in medicine and medical research.
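The general pattern behind grounding is often called retrieval-augmented generation: find the passages in your own data that are most relevant to a question, then ask the model to answer only from those passages. Here is a minimal sketch of that idea, assuming the OpenAI Python SDK; the documents and model names are placeholders, and the companies above use their own, far larger pipelines (chunking, vector databases, citation checks).

```python
# A minimal sketch of grounding: answer questions only from your own documents.
# Assumes `pip install openai numpy` and OPENAI_API_KEY in the environment;
# documents and model names are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Policy 12: Remote work requests must be approved by a department head.",
    "Policy 31: Travel reimbursements are filed within 30 days of the trip.",
]

def embed(texts):
    """Turn each text into a vector so we can measure semantic similarity."""
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

doc_vectors = embed(documents)

def answer(question):
    # 1. Retrieve the document closest in meaning to the question.
    q_vector = embed([question])[0]
    scores = doc_vectors @ q_vector / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vector)
    )
    best_doc = documents[int(np.argmax(scores))]

    # 2. Ask the model to answer using only that document, which reduces hallucination.
    chat = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Answer only from the provided document. "
                                          "If the answer is not there, say so."},
            {"role": "user", "content": f"Document:\n{best_doc}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content

print(answer("Who approves remote work requests?"))
```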

10. Have you heard about autonomous agents? These are AIs that work toward a goal you set, breaking it into tasks and assigning some of them to other AIs. See Auto-GPT, BabyAGI, and AgentGPT: How to use AI agents, How to Use AgentGPT to Deploy AI Agents From Your Browser, and What is Auto-GPT and why does it matter?
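Under the hood, these tools are essentially a loop: the model proposes the next task toward your goal, the result is added to its notes, and the loop repeats. Here is a toy sketch of that idea, assuming the OpenAI Python SDK; it is not the actual code of Auto-GPT or its cousins, which add tools such as web search and code execution, longer-term memory, and stopping rules.

```python
# A toy loop showing the idea behind Auto-GPT-style agents.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment;
# the goal and model name are placeholders.
from openai import OpenAI

client = OpenAI()
goal = "Draft a one-page outline for a library workshop on AI literacy."
notes = []

for step in range(3):  # cap the number of steps so the loop always ends
    prompt = (
        f"Goal: {goal}\n"
        "Work so far:\n" + "\n".join(notes or ["(nothing yet)"]) + "\n"
        "What is the single next task toward the goal, and what is its result? "
        "Reply as 'TASK: ...' then 'RESULT: ...'."
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    notes.append(f"Step {step + 1}: {reply}")

print("\n\n".join(notes))
```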

11. And farther out in the future, there will be multi-sensory AI that can draw on several kinds of input at once. See ImageBind from Meta. This open source model combines text, audio, visual, movement, thermal, and depth data. It’s only a research project for now, but you can imagine how this might work in future models.

12. If you’ve heard all the news of AI experts signing statements about the risks of AI causing human extinction, you might wonder: is that another thing we need to worry about, in addition to climate change and nuclear war? Here is the best response to that I’ve read: Let’s talk about extinction by Azeem Azhar. A good point he makes is that “just because people are experts in the core research of neural networks does not make them great forecasters, especially when it comes to societal questions or questions of the economy, or questions of geopolitics.”

13. And finally, people will continue to make serious mistakes in their use of ChatGPT (until more people develop some degree of AI literacy). See Lawyer cites fake cases invented by ChatGPT, judge is not amused, and Professor Flunks All His Students After ChatGPT Falsely Claims It Wrote Their Papers.

This is why we all need AI Literacy. If you want to develop your own AI literacy (and teach others to do the same), take a look at my May 18 webinar recording, AI Literacy: Using ChatGPT and AI Tools in Instruction. See also the handout for many more sources.

I’ll be doing a similar webinar for AMICAL on June 21, and I’m also talking with ALA about a series of webinars for later this year.

Follow me on Twitter or Mastodon for daily links to AI news.