Artificial intelligence? What could possibly go wrong?
I think it was around 2022 that AI became a mainstream term, meaning it was becoming known beyond the circles of the nerds who created it, tested it, and made it understandable for us cool kids who would want to use it.
And if you’re an AI nerd, I know you’re not offended by my use of the word “nerd” in this context because you know you’re one and thus are not offended by the truth.
I recall being quite skeptical of the whole concept of “artificial intelligence” when it became mainstream.
The idea of a machine based in god-knows-where writing reports, term papers, doctoral dissertations, books, etc. just didn’t seem like a good idea at the time.
And now 3 years later, we’re starting to see the results of this.
Keep in mind that I’m an avid user of AI. I use it for all sorts of things, and have even launched a YouTube channel publishing materials that are created almost entirely using AI software.
Albeit with an important twist, which I’ll get to in a moment.
I recently read an article - in all likelihood written by AI - about how the vast amount of content written by things like ChatGPT and then published on social media is creating what’s called a “feedback loop”. That’s a fancy way of saying AI is creating literary garbage, which in turn is used to create even more garbagey garbage when these AI bots scour the web to produce fresh - or not so fresh - content.
The only social media platform I use these days is Facebook, and it’s become unbearable to scroll through and see article after article with the same ChatGPT word salad describing a minor random act of kindness from 1980 as though a one-armed monk in Tibet had rescued an entire orphanage from a fire.
We all know a human being with basic writing chops could write it better, with the appropriate amount of gravitas, and with about a quarter of the word count.
But why bother writing it when the computer can write it for you?
So here we are.
Now back to my “important twist” I mentioned a moment ago.
My personal opinion of AI is that it’s extremely useful - when used properly and with actual human intelligence guiding it.
A hammer with a head made of steel is going to be far more effective at driving nails than one with a head made of wood. And a hammer with a handle made of wood will be much easier to use than one with a handle made of steel.
It’s a matter of how the tech is balanced.
So when I’m using AI to do something like edit a podcast, I’m not asking myself, “How can I do the same thing I did in 2022, only quicker?” I’m asking, “How can I use this tech to create a product that would have been impossible to make in the amount of time it took me in 2022?”
This is the exact mindset I took when I began The Unwitting Hero series on my YouTube channel.
Everything about each video is AI-generated. The script, the voices, the graphics.
But the time it takes to create a single 5-minute video is exorbitant, at least by today’s standards. It sometimes takes 3-4 hours to get everything exactly the way I want it to look, sound and feel.
That being said, in 2022, getting a product that looks, sounds and feels the way these videos do would have taken weeks, and probably thousands of dollars.
A couple of hours per day, plus a few subscriptions that cost me in the neighborhood of $100 per month, is pretty manageable by comparison.
And I like to think that when ChatGPT searches the internet for content and inevitably draws upon mine, it will have something of real quality to use, and thus improve rather than dilute the overall quality of the www.
Folks, AI is here to stay. You can be a curmudgeon and swear off both it and those who use it, or you can use it intelligently and in turn create content of real value.
For my part, I’d build a house with the hammer head made of steel every day. But the hammer ain’t gonna build the house without my wiry frame using it with the skill I’ve developed over years of practice and experience.
That about sums up my philosophy re: the use of AI.
And for the record, I have no experience building houses.