Recently, I suggested an AI tool to some writers who were seeking input for their articles. The articles are intended for an online writing platform whose algorithms are driven by software, and optimized for other software algorithms and search engines.
In essence, a digital world!
Yet, the writers were wary of digital tools.
Many online content creators, writers, and people outside the tech industry treat AI either as a friend to spend time with or as an enemy to be hated.
How about a different perspective?
It is neither. It is, simply put, a tool.
Behind the tool are many millions of lines of code that build language models (large or small), derive patterns from vast amounts of text, and generate the most intelligent output possible.
The success is, of course, astonishing.
Isn't it even more astonishing that software already powers our day-to-day world, from the moment we check our phones, to the payment points we order from, to the coffee we pre-order on an app?
CCTVs monitor our streets and roads, using software to record footage, match images, and trigger alerts based on pre-configured rules.
Medium runs on software algorithms with custom business logic. So does Substack. Both are Software as a Service (SaaS) platforms.
Yet, I find that writers on both platforms dread using AI software to help them create headlines, edit (just like using Word tools), or optimize for search engines (SEO) in a digital world.
Fundamentally, software operates on the Garbage In, Garbage Out (GIGO) principle, and AI is no stranger to this rule. If you write ineffective prompts, you get ineffective answers.
If you write effective prompts and ask for specific help, you may get some good answers.
Most of the time, you need human intelligence to apply what you get.
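The difference between a garbage prompt and an effective one can be sketched in a few lines. The helper below is purely illustrative, not any vendor's API; it just shows how spelling out the task, audience, tone, and constraints turns a vague request into a specific one:

```python
# Toy illustration of the GIGO principle for prompts. The function and its
# parameter names are hypothetical; the point is input quality, not the API.

def build_prompt(task, audience=None, tone=None, constraints=None):
    """Assemble a specific prompt from explicit requirements."""
    parts = [task]
    if audience:
        parts.append(f"Audience: {audience}.")
    if tone:
        parts.append(f"Tone: {tone}.")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints) + ".")
    return " ".join(parts)

# Garbage in: a vague request, so any answer will be generic.
vague = build_prompt("Make my article better.")

# Effective in: a specific task with audience, tone, and limits.
specific = build_prompt(
    "Suggest three alternative headlines for my article on remote work.",
    audience="mid-career professionals on Medium",
    tone="conversational but informative",
    constraints=["under 60 characters", "no clickbait"],
)

print(vague)
print(specific)
```

Whichever chat tool you use, the second prompt gives the model something concrete to work with; the first leaves it guessing.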
Yes, you need to be careful what you share if the data you share is going into a pool of crowdsourced information.
But no, you don’t need to worry that an article you paste into ChatGPT is going to fundamentally alter the universe. It won’t.
‘Ultron’ is still a concept defined by Hollywood screenwriters.
What you don't want to do is get a machine to create your entire post and pass it off as your own.
That is ‘cheating,’ and no matter what tool you use, it is still frowned upon unless you declare it is 100% tool-generated.
By the way, haven’t you spotted the fakes yet?
They always start with a headline for every 50-word section, because GPT thinks one is required. They also lean on words like ‘tapestry,’ ‘Introduction,’ and ‘Let’s Connect.’
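The telltale signs above can even be counted mechanically. This is a toy heuristic, not a real AI-text detector (genuine detection is far harder, and these markers alone prove nothing), but it shows the idea:

```python
# Toy heuristic inspired by common AI-writing tells. The phrase list is
# illustrative only; matching them proves nothing on its own.

TELLTALE_PHRASES = ["tapestry", "introduction", "let's connect"]

def telltale_score(text):
    """Count how many telltale phrases appear in the text, case-insensitively."""
    lowered = text.lower()
    return sum(1 for phrase in TELLTALE_PHRASES if phrase in lowered)

sample = "Introduction: Let's Connect and explore the rich tapestry of AI."
print(telltale_score(sample))
```

A high score is a hint to read more closely, nothing more.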
If you have a 2,000-word post created in Word and have no qualms using Word’s ‘Editor’ features for grammar and spelling, Outlook’s spellcheck, Gmail’s auto-composed replies, or Grammarly, then you should not hesitate to use copy.ai, jasper.ai, or ChatGPT (free) to help you optimize your output.
If you use PowerPoint or Canva templates to create better-looking decks, then you can use ChatGPT to create a presentation script.
After all, at work, we seldom send a bad email just because we have taken a moral stance against tools.
AI is nowhere near the level of creating a catastrophic event for humankind. If you work with AI daily, you will know this: it is sometimes very foolish and sometimes smart.
ChatGPT has shown the power of AI to strip away the boring bits of our online and work lives, freeing up our minds to focus on higher-level tasks.
Should AI have guardrails? Yes.
Should 100% AI content be marked? Yes.
Should deep fakes be detected? Yes.
Will people attempt to defraud others? Yes, but they already do that without AI, and fraud is still far from a controlled menace.
Note: Most online fraud today still involves people sharing their passwords, financial information, and computer access, or physically going to stores to buy gift cards, or wiring money from banks out of fear, because the wrong people convinced them their online accounts had been compromised.
Would AI help detect online fraudulent activities better? Yes.
Should user data be protected? Yes.
Should we be careful and use credible AI tools? Yes.
Should we stop using AI tools? No.
Just to emphasize my point, I fed this article into ChatGPT and asked it to review it. It is so much simpler to have it deliver a TL;DR version in a few seconds!
“Your article adeptly navigates the paradoxical sentiments people harbor toward AI — viewed either as a helpful friend or a potential adversary. You argue that AI should be seen simply as a tool, demystifying it from the extreme affiliations of friendship or enmity.
You effectively highlight the ubiquity of software in our daily lives, emphasizing that AI, like any tool, is only as good as the input it receives. The importance of human intelligence in interpreting AI output is a key point that resonates well.
The post’s practical approach encourages the responsible use of AI, acknowledging its limitations and potential. You skillfully dispel myths surrounding AI’s omnipotence, promoting a nuanced perspective that aligns with reality.
The article concludes with a clear stance: AI, with proper guardrails and ethical considerations, is a valuable asset that should be embraced rather than feared. Overall, it presents a balanced and informed viewpoint on AI in a world increasingly shaped by digital tools.”
If you found this helpful, please like, share, and comment! Your support is invaluable.