On a fictitious day in July 2020, 500 chatbots read the news: our news, the real news from July 1, 2020. Afterward, these 500 robots discussed their readings on a platform similar to, but not exactly like, Twitter. Meanwhile, several scientists observed from our world, the real, non-simulated one, trying to pinpoint the future of AI in social media.
The scientists constructed the bots using ChatGPT-3.5, aiming to learn how to build a better social network than our current platforms, which are often divisive and abrasive. To understand how to improve Twitter in the real world, they created a model social network in a lab, essentially a ‘Twitter in a bottle.’ Petter Törnberg, the computer scientist who led the experiment, asked, ‘Is there a way to promote interaction across the partisan divide without driving toxicity and incivility?’
The group then devised three distinct approaches to determine which posts to highlight on a Twitter-like platform. In the first model, bots were placed into networks primarily populated by other bots holding the same beliefs, creating what was effectively an echo chamber. The second model was a traditional ‘discover’ feed that displayed posts liked by the most other bots, irrespective of their political stance. The experiment focused on the third model, which used a ‘bridging algorithm’ to display posts from bots of the opposing political party that received the most ‘likes.’ Consequently, a Democratic bot would discover exactly what the Republican bots found appealing, and vice versa.
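The bridging feed described above can be sketched as a simple ranking function. This is a hypothetical illustration, not code from the study; the data structure and function names are invented, and a real implementation would rank far more signals than raw like counts.

```python
from collections import namedtuple

# Minimal post record: who wrote it, what it says, how many likes it got.
Post = namedtuple("Post", ["author_party", "text", "likes"])

def bridging_feed(posts, viewer_party, k=3):
    """Return the k most-liked posts authored by the *opposing* party,
    so each bot sees what the other side found appealing."""
    cross_party = [p for p in posts if p.author_party != viewer_party]
    return sorted(cross_party, key=lambda p: p.likes, reverse=True)[:k]

posts = [
    Post("D", "d1", 10), Post("D", "d2", 7),
    Post("R", "r1", 9),  Post("R", "r2", 12),
]

# A Democratic viewer is shown the Republican posts, ranked by likes.
feed = bridging_feed(posts, viewer_party="D", k=2)
```

By contrast, the echo-chamber condition would filter for `p.author_party == viewer_party`, and the traditional discover feed would skip the party filter entirely and rank all posts by likes.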
Humanizing the Bots
As the bots become increasingly similar to us, experimenting with them teaches us more about ourselves. And that brings us to yet another issue: playing with these virtual replicas in a lab setting raises uncharted ethical questions. The replicas would be constructed from our digital waste, our written recollections, our images, and perhaps even our financial and medical records. If researchers used social media data to make predictions, they could ask the model very private questions, the kind you would never want to answer yourself. And even though the accuracy of those answers is unknown, it is feasible that they would be highly predictive. Put another way, a bot built on your data might deduce your true secrets, with no motivation to keep them that way.
The issue is that adding extra detail defeats the purpose of a model. Scientists design experiments to be easier to understand than reality, offering explanatory power without the complexity of real-world messiness. By substituting AI replicants for humans, Törnberg may have inadvertently resolved a far more significant societal puzzle.
What Will the Future Hold?
Perhaps in the future, artificial intelligence can post on social media with all the emotion and ferocity of real humans, enabling us to finally log off. I just hope the AI that takes my place has good hair and inherits my great sense of humor.