ChatGPT and the dangers of data harvesting
There has been quite a buzz lately about ChatGPT, the new artificial intelligence chatbot developed by OpenAI. Many appear to be enjoying themselves interacting with this large language model. There are stories of how it can create stunning visual art, write respectably good poetry and even make a general internet search more productive.
Some are even getting a kick out of trying to make ChatGPT break out of its given parameters. Reddit is full of snickering stories of the DAN (Do Anything Now) prompt, which attempts to get ChatGPT to create for itself an alter ego able to do things ChatGPT is not allowed to do.
Others are concerned that the data it was trained on has been carefully curated and is therefore biased, and that the bias has already shown itself. It was reported that ChatGPT was willing to write a poem praising Hunter Biden but was unwilling to do the same for Rep. Marjorie Taylor Greene.
There are also those who foresee an issue for academics. ChatGPT can purportedly write an essay that meets class assignment parameters and is indistinguishable from human writing. There is no copyright and no way for a teacher to check for plagiarism. Given the same prompt multiple times, it produces artfully varied results.
We, as a society, have been using artificial intelligence for years. These systems are more accurate than humans in certain fields, such as reading X-rays or MRIs, and our medical community has put them to good use. Even the list of suggested words above the tiny keyboard on your smartphone is a kind of AI attempting to anticipate the next word you will type.
Just think of how much easier and more productive this product can make our lives. But some are concerned that it will negatively impact the job market, and in fact it is already doing so. Those interested in studying graphic design as a career, for instance, are being advised to rethink that choice. You see, ChatGPT can largely perform such tasks from a simple prompt.
The amount of time spent on our smartphones gaming or searching the internet is bad enough. Don’t even get me started on the time eater called social media! Our educational systems even demand our students use Google Classroom or the like in order to participate in necessary coursework. And now we have been given a new tool to play with.
The problems mentioned above should give us pause about this new technology. Knowing that it’s all meant to gather information from you to sell as packaged data for marketing purposes is yet another issue. But is that all it’s meant to do with that data it’s gathering from your phone, your computer, your Fitbit, even the Internet of Things in your smart home? What if there is more to it than just marketing?
Artificial intelligence computing gains knowledge through exposure to data. Just consider how many hours a day your family feeds a plethora of databases through your search preferences, your music choices and your reading material. Firsthand opportunities for it to garner data from users are off the chart right out of the chute: ChatGPT reached 100 million users within its first two months of public availability. It took nine months for TikTok and two years for Instagram to reach that milestone. It is learning with every interaction.
There are purportedly devices and programs that even watch through your device camera and evaluate your eye movements as you read an article such as this one, noting your pupil dilation and rate of motion to determine your likes and dislikes, mood and health.
The information amassed on Google Classroom alone is enough for an AI like ChatGPT to draw psychological and sociological conclusions about our up-and-coming generations, conclusions we might prefer be left private. No doubt it could be trained to extrapolate IQ as well, if it hasn't been already.
Klaus Schwab, the chairman of the World Economic Forum, spoke recently in Dubai at the World Government Summit. Speaking about artificial intelligence and other “Fourth Industrial Revolution” technologies, he tellingly stated, “Our life in 10 years from now will be completely different, very much affected, and who masters these technologies, in some way, will be the masters of the world.”
Data breaches, hacking and ransomware attacks aside, this information could never be used to control or manipulate us, right? The book "Pre-Suasion" by Robert Cialdini describes some pretty eerie (and sometimes morally questionable) tactics currently in use within the marketing industry. Should we be more concerned about the amount of data being harvested from ourselves and our families?
Maybe a good dose of digital veganism and a very good VPN are in order! I also encourage you to check out Global Walkout and implement their suggested actions as well; there are already 18 action items posted, with more to come. At the very least, be aware that our technological advancement is reaching a point where the concentration of power has become dangerous.
In the Bible, Christ Himself warns us: “For there shall arise false christs, and false prophets, and shall show great signs and wonders; insomuch that, if it were possible, they shall deceive the very elect” (Matthew 24:24, KJV).
I don’t think there’s ever been a time more fraught with false information, manipulation, intrigue and outright lies than now. False christs and false prophets abound in the form of technologies (and the leaders who wish to harness their power) that promise us ease and comfort in exchange for just a little data. Don’t wait until there’s no chance to counter this agenda. Act now!
Content created by the WND News Center is available for re-publication without charge to any eligible news publisher that can provide a large audience. For licensing opportunities of our original content, please contact firstname.lastname@example.org.