I was looking today at the latest news about the training of artificial intelligence models. As you have probably seen everywhere online, this is the latest trend: shoving AI down everyone’s throat (“integrating AI into the interface”, to put it politically correctly), something stupid, useless and annoying. Then, forcing everyone to participate, more or less, in the training of these AI models, although nobody can explain why it is so important to train them, or why it is important that we do it ourselves through our online interactions. I mean, why would I care to train an AI model?! It’s not my “child”, and I have no responsibility to look after it and make it stronger and stronger. And I have no guarantee that I’m not raising (and feeding) a snake that’s going to harm me later!
Anyway, it appears there is a general consensus that AI models are “good”. Good for what? Order a pizza? And for whom? Make some rich people richer? Rediscover the wheel? Make entire categories of jobs obsolete? Replace actual people in a call center with dumb AIs that can’t do anything because they can’t understand anything? Help dumb children, who can’t formulate a simple sentence or do basic math, become even dumber? Write their homework? Pass their exams in their place? Keep them company in the absence of parents who are never around and don’t give a damn about their offspring?
Now, it appears that the AI models currently have a problem: they are dumb. Oh, well… I meant to say that they “reason” in a peculiar way, that is, they identify patterns and calculate probabilities before giving an answer. In other words, they don’t think deductively, logically; rather, they weigh the many possibilities emerging from the question put before them, and the most likely answer (or the answer that was most frequently given and recorded as true) is delivered as the output. To put it more simply, 1+1 equals 2 not because 1 and 1 make 2, but because the probability of 1+1=2 is much higher than the probability of 1+1=3 or 1 or 0 or anything else. The AI models can’t think the way we do; they process an enormous body of data (which, by the way, consumes a lot of energy in the stupidest possible way) so that this process of fitting the right answer into some sort of tree-like decision procedure can return decent answers and not absurdities. To push this further, in a twisted way: we’re feeding the AI models a lot of information so that they can be increasingly sure that 1+1=2 and not something else (although there’s always a chance the answer comes out different).
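If you want to see just how little “reasoning” is involved, here is a minimal sketch in Python. Everything in it is invented for illustration (the completion counts, the `answer` function); a real model learns a probability distribution over tokens with a neural network, but the principle is the same: pick the most likely continuation, never actually compute anything.

```python
from collections import Counter

# Pretend these are the counts of each completion of "1+1=" seen in training
# data. The numbers are entirely made up for this sketch.
observed_completions = Counter({"2": 9812, "3": 41, "11": 37, "window": 2})

def answer(counts: Counter) -> str:
    """Return the most frequent completion; no addition is ever performed."""
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    print(f"P({best!r}) = {n / total:.4f} <- chosen only because it is most likely")
    return best

print("1+1 =", answer(observed_completions))  # prints "2", by frequency alone
```

Run it and you get 1+1 = 2 with a probability of about 0.9919 – not because anything was added to anything, but because “2” won the popularity contest.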
So now, the owners of the AI models have come up with a new idea: why not put the AI in front of movies, just like stupid parents used to park their kids in front of the TV (used to, because today we have smartphones and online videos/games, excuse me!). Why not let the AI watch video content (films) instead of just reading books, articles, etc.? And, better yet, since the AI can generate virtual worlds – the same way we generate maps for online games – why not let it play and learn in those worlds, where it can have “experiences”… “learning experiences”… I mean, it’s not enough to read about the world and approximate it (as an AI); you must live in virtual worlds and get some “virtual life experience” (I wanted to say “real life experience”, but that would have been a mistake). So now we have computers hallucinating worlds in which AI models hallucinate life situations; I bet a lot of “virtual life experience” will come out of this… and also a lot of “virtual insight”… Being sarcastic here…
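For the curious, the recipe they’re describing boils down to something like this: invent a world, drop an agent in it, and let it learn values from transitions the simulator makes up. Below is a toy sketch using tabular Q-learning in an invented corridor world; every name and number in it is hypothetical and has nothing to do with any actual product.

```python
import random

random.seed(0)

# A "hallucinated" world: a one-dimensional corridor; reaching the right end
# pays reward 1. A hypothetical toy stand-in for a learned world model.
GOAL = 5
ACTIONS = (-1, 1)  # step left, step right

def simulate_step(position: int, action: int) -> tuple[int, float]:
    """The generated environment: returns (next_position, reward)."""
    nxt = max(0, min(GOAL, position + action))
    return nxt, 1.0 if nxt == GOAL else 0.0

# Tabular Q-learning: the agent's whole "virtual life experience" is a table
# of values learned from transitions the simulator invents.
q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(200):  # 200 "hallucinated" episodes
    s = 0
    while s != GOAL:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)        # explore at random
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])  # exploit the table
        s2, r = simulate_step(s, a)
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

print("learned policy at the start:",
      "go right" if q[(0, 1)] > q[(0, -1)] else "go left")
```

Two hundred imaginary walks down an imaginary corridor, and the agent “knows” to go right. That’s the kind of “virtual life experience” we’re talking about.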
So, wrapping up: it wasn’t enough that we had young people completely cut off from real life, living online in fantasy worlds, playing attention-seeking games, growing entitled and phlegmatically recommending that we “educate ourselves”; now we have AI models training on hallucinations (because the books, at least, contained some real life experience… I mean, the authors were real people who lived real lives… ya know…).
Every time you imagine something, you inadvertently generate errors. You imagine tomorrow… and when the day is over, you discover that it didn’t go 100% according to plan. Reality helps you adjust and readjust… and you learn something new every day. As you get older, you become wise (well, a minority becomes wise; the majority stays the same), and this wisdom is not a large quantity of information (which is what the AI models appear to be so keen on), but the fact that you know how this life/world works, based on repeated observation (it’s called “life experience”). We forget most of our life (so as not to go crazy – yes, forgetfulness is a blessing) and we remember only the conclusions (and the times we were stupid and it hurt).
Now imagine that you don’t have the correcting function of reality at all. Contrary to popular opinion, cows are not purple (remember the shocking case of kindergarten children drawing purple cows because that’s how cows are pictured in Milka advertising, and the kids had never seen a real one). And that’s an easy example. Life doesn’t happen the way it does in computer games, and deep relationships aren’t built on dating apps and one-night stands. Sorcery and astrology do not replace science and a long reading list. And a car dealer can’t learn medicine by listening to online podcasts on fast-forward over a couple of months.
I fully expect new hallucinations stacked on top of other hallucinations, served to the general public as specialist advice by an even dumber future AI model. And I politely say hello to the girl doing nails down my street, who might one day become my life-saving surgeon… because, you know, to make it big in life, all you ever need is a bit of ambition and some AI-enhanced online learning…