There is so much hype about ChatGPT and other chatbots nowadays. There are FB ads proclaiming that AI can write 10 ebooks a week for a person, or that it can make one a millionaire in no time at all through stock trading or whatever. Chatbots are being bandied about as the answer to everyone's dreams.
But AI is not very intelligent. It is just a language model. If you do research, you must check AI-generated "facts", because some of them are wrong; and you'd need citations for academic research anyway.
I asked ChatGPT about some facts in Moro history. The answer sounded very authoritative. But I knew a bit about Moro history, and I noticed immediately that it had the Sultans reigning in the wrong centuries. When I pointed that out, it admitted that it did not know much about the history of that part of the world.
It is easy to spot AI-generated articles. The biggest tell-tale sign is that there is usually one glaring piece of wrong information. For example, an apparently AI-generated article described its subject, an actress, as a star of the 1970s. But that actress did not enter Hollywood until the 1980s. I sometimes wonder if those errors are deliberate.
Another tell-tale sign is the use of generic and vague descriptions, motherhood statements, and non-specific details.
I've read a whole thesis with a ton of information in its RRL (Review of Related Literature) and Theoretical Framework that never mentioned the very core of the problem. Any person who had read that amount of literature could easily have come across the real root cause of the stated problem.
It's a language model, but does it truly understand language? Sometimes, AI results remind me of my college students. They copy one paragraph from one of my lectures, paste it together with another paragraph from another lecture, and put in a sentence or two from the reading materials. The difference is that the AI answers are well composed, grammatically correct, and even erudite-sounding, but basically wrong. My students' essays, on the other hand, simply did not make any sense at all.
I saw a news feature on the 1972 Philippine basketball team to the Olympics. It was based on an old newspaper photo with a caption. Either it was AI-generated or it was written by a young reporter who knew nothing of Philippine basketball in the early 1970s. It was totally wrong. Based on the news clipping, the AI or an uninformed reporter simply speculated or imagined what could have happened then. I wrote a comment pointing out the mistakes in the news feature. The page editor of the news org did not bother to answer. S/he just deleted my comment afterwards.
There's another article on the 1972 RP Team to the Olympics, from One Sports, but unlike the one mentioned above, it did not do any speculating or imagining. It merely copied whatever is mentioned in Wikipedia and other sports sites about the results of the games the Philippine team played. It wrote a bit (just a bit) about the history of RP basketball in the Olympics, and mentioned the names of the players on the 1972 team. But it did not mention the significance of the RP basketball team in Munich, which made all the basketball-loving Filipinos at that time watch the games. If I remember correctly, they were even shown on TV live via satellite.
I asked GPT if it could reduce my short story from 10,000 words to 7,000 words. It said it could, by doing so and so. It reduced it to 700 words! It did not edit it down; it created a summary instead! I told GPT that it was no longer a story. It agreed that it was more an essay or historical narrative. I asked if its essay gave any indication, or even nuances, of the setting: the years, the political climate, the various physical surroundings of the different places in Ilocos, Manila, Mindanao, etc.
GPT then said that I was correct and that it could add more descriptions, etc. In short, it has all the theoretical information about what constitutes fiction, but it does not truly understand what that information means.
Last year, I asked ChatGPT if it knew astrology well. It said that it didn't. A month or so ago, I asked GPT if it knew astrology. It said yes, it knew the zodiac signs, the planetary aspects, etc. So I asked if it could tell me the effects of some planetary configurations on Ukraine, Israel, Trump, Putin, etc.
It gave me a very informative and interesting output, until I checked the details. I found out, for example, that it gave the wrong rising sign for Ukraine. I pointed that out, and ChatGPT said I was correct and changed its output. Then I noticed that Putin's Moon was wrong, along with other aspects of his chart. Again, GPT said I was correct and promptly changed its output. The more I checked, the more wrong information I found from GPT. I concluded that GPT does NOT know astrology at all. It was hallucinating the whole time.
If ChatGPT were my student, I would grade some of its results a big F!
If a person doesn't know well the subject s/he is asking GPT about, say, astrology or quantum mechanics or the Napoleonic Wars or the Peloponnesian War, s/he could get very well-written and seemingly well-researched answers that are very wrong! And s/he wouldn't know it until somebody knowledgeable pointed out what's wrong with it. If a student submitted such a paper, and the teacher also knew nothing about the subject, then the student could get an A. This would add further to the dumbing down of students, and of society in general.
In an AI masterclass I attended, we were asked to write something using multi-layered prompts. Even after so many iterations, I still had to tweak the final result to get what I wanted. But I must say, AI co-crafted (with me) a very nice essay.
The much-ballyhooed moodboards also leave something to be desired. Even after many iterations with ChatGPT and Gemini Flash 2.0, I was still not satisfied. I had to do more photo editing to get what I wanted.
In other words, AI is not a do-it-all wonder machine. Human intelligence is still needed to guide it, correct it, and improve on it. Maybe in the future, all-knowing, super-intelligent AI assistants for everyone will exist, just like in sci-fi movies and novels. But that is for the future.
*********************************************
If you liked this post, please subscribe,
or donate if you can through PayPal or GCash,
so I can continue maintaining this blog.
Check the sidebar =====>>
Thanks.
**********************************************





