This week we'll be exploring the idea of AI and truth. Does AI tell the truth? What is truth anyway? What can you do to make sure your reputation isn't damaged?
Wait! Before we get to the good stuff...
Do you like my blog? If you do, make sure you subscribe to my email list so you don't miss next week's post. And thank you from the bottom of my heart for reading!
What did you think the first time you heard of ChatGPT?
The first time I heard of ChatGPT was on a random Thursday afternoon in Notting Hill, London.
At that point in my life I would go to a cafe every morning, order a double shot of espresso and smash out work. That day I uncharacteristically needed a second hit, so I went to a tea shop I liked for a matcha.
Annoyingly, the shop’s seating was closed, which I only realised after ordering the drink. I decided to go to another cafe across the road called Natoora to sit and get some work done. They’re also a grocery store, so I knew they wouldn’t mind if I bought food.
The shop was busy when I walked in, but I luckily snagged the last table, second from the window. At some point, as I plugged away at work, I struck up a conversation with the guy next to me. I forget how we started talking, but we chatted for about an hour while he showed me his website designs and told me his entire life story and his plans for the future. The best bit of the conversation came towards the end, when he asked, “Have you heard of ChatGPT?”
“No, what’s that?” I said.
“I’ll show you.”
I proceeded to watch him use ChatGPT to write an email to an employee, posing as a legal team, telling him to stop watching ‘inappropriate content’ on his laptop. We laughed and did a bunch of other silly things with it, but my life was forever changed.
I now use ChatGPT regularly for writing, for answers and for ideas when I feel stuck, but I’m torn on whether the service is a force for good. With AI this smart, people are getting fooled left, right and centre by scammers. I remember one TikTok showing how scammers used AI to fool parents into believing their children had been kidnapped.
The truth was already hard to identify; AI makes it even harder, and it gives people who act in ways detrimental to society the means to multiply their efforts.
Last Saturday led me further down the path of thinking about the ramifications of AI.
In a lot of ways, humans are playing God by creating AI. Let that sit.
Fast forward to an interesting article from the New York Times
I had a call with John Cunningham, a nationally recognised Section 199A and LLC lawyer I was introduced to through my father, Edward Gainor, who is an insurance lawyer in the States and domestically recognised (by me) as the best dad ever.
As John and I began speaking, I quickly realised how interesting he is. What was meant to be a thirty-minute call about setting up my LLC turned into a conversation that lasted over an hour.
A few days later, John sent me a New York Times article titled “How to Live a Happy Life, From a Leading Atheist”, an interview with Daniel C. Dennett, who “has been right in the thick of some of humankind’s most meaningful arguments: the nature and function of consciousness and religion, the development and dangers of artificial intelligence and the relationship between science and philosophy, to name a few.”
(Dennett pictured)
The interview touched on sentient AI. The interviewer asked, “Does that imply that there’s nothing stopping A.I., which we currently think of as more capable of competence rather than true comprehension, from becoming sentient?” Dennett’s answer: “Yes, strong A.I. is possible in principle.”
Which, frankly, freaked me out.
Sentient AI and Truth
Sentience is the ability to feel and to perceive the self. Even if AI becomes filled with emotion or human-likeness, I question whether it will be more accurate, or merely accurate at responding the way its makers would.
Dennett’s interview reminded me that truth is difficult to identify in the first place; it relies on humans constantly seeking something elusive. Truth can be distorted and hidden by clever marketing (known by other names, like propaganda, in the past) and by mass ideologies. What’s more, the split between objective and subjective truth makes settling on a single truth impossible.
The problem with AI is that people tend to take what computers say as absolute truth. Computers are accurate at doing what they are meant to do, but someone still has to programme them to do it, based on the data available (Harvard Business Review). Most people aren’t going to programme AI to lie, but everyone has blind spots. As AI gets closer to sentience and we embed it into our culture, it’s important that you and I keep thinking critically and treat ChatGPT and other AI services as the word of a smart stranger rather than gospel.
How to manage risks from AI
AI comes with risks. In my opinion, AI is no better or worse for truth than the humans who use it, and it should be treated accordingly. That means testing, checking and asking for feedback, just as you would if the work came from a human.
Next time you use AI, think about the data the programme is using, who made it, and how much trust matters in the project you’re doing. If you’re writing an email or a social post, AI could be a great tool for you. But if you’re working on a project or reading an article on a political issue, where objective truth matters and your reputation is on the line: test, check and get feedback.