AI can be a smart friend or a crazy uncle
Today I think of generative AI as a smart friend who often gets things right. The kind of friend who has diverse interests, has read widely, and has even tried to connect the dots between disciplines.
This kind of friend is useful for people who need to quickly understand a new technology or process—but also need to bring a bit of skepticism to their inquiry.
As a team of copywriters, we write for diverse clients, which means we often have to get up to speed quickly on a new industry or discipline. We typically read articles and scientific papers and interview experts as we start to form opinions, well before we type a single word.
ChatGPT, the AI I’ve been playing with, can help with those initial impressions—here’s how:
- If I truly know nothing about a topic, Wikipedia is a starting point that gives a broad overview and highlights areas of disagreement. I happen to like its format and the bibliography at the end.
- After that, I read articles and scientific journals, then conduct interviews to fill in the gaps with specific domain knowledge.
- Then I mouse over to the ChatGPT button to compose my best, maddeningly specific question to ask this smart friend.
It turns out that it helps to know a bit about a topic before asking ChatGPT for specifics. With that grounding, ChatGPT is useful (and perhaps feels safer) because I can start to see which answers may be more fiction than fact.
AI is a good beginning and a suspect ending
ChatGPT reminds me of a decades-ago trip around a South Asian country. My new wife and I had not mastered the language, and the signage was entirely opaque to us. So at each train station, we asked a handful of people which was our train and then aggregated the responses to get a consensus. We arrived, for the most part, where we needed to go.
Same with ChatGPT: a lot of it looks right because I have checked with enough experts to get a general sense of what to trust. And there are parts (and conclusions) I trust less, like edging away from the table when the crazy uncle starts spewing forth.
Your mileage may vary
The caveats for ChatGPT (listed under “Limitations” on their website) are useful:
- “May occasionally generate incorrect information”
- “May occasionally produce harmful instructions or biased content”
- “Limited knowledge of world and events after 2021”
In fact, those caveats are not so far off for any information we receive from anyone. To get true knowledge, we must always verify with other sources. It is good to keep an eye on what a source knows, their particular focus, and also to know what lies outside that focus.
I read a lot of hand-wringing about how this smart friend, ChatGPT, and other kinds of generative AI are coming for our jobs. But until that happens, it makes sense to experiment with the technology to see how and when it might jump-start the craft we practice.