Big Tech and AI: A Match Made in the Matrix

In light of Microsoft’s recent hefty investment in OpenAI after its generative artificial intelligence (AI) chatbot, ChatGPT, went viral, everyone is wondering how the game has changed, since Big Tech giants set the rules for the rest of us. OpenAI is a startup specializing in artificial intelligence research, aiming to develop friendly AI that benefits humanity as a whole. After the commercial success of DALL-E 2 and ChatGPT, Big Tech companies are seriously considering investing in the next “big thing.”

ChatGPT 101

Unlike other AI chatbots out there, ChatGPT can hold a conversation in a remarkably human-like manner, fielding questions both serious and silly. It can answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.

Its tasks center around natural language processing (NLP) and include text generation, language translation, text summarization, and sentiment analysis. The public went wild over the tech because it has been trained on a massive corpus of text data, resulting in highly accurate and fluent answers. Furthermore, its human-like responses stay closely tied to the prompt, displaying an unprecedented degree of apparent knowledge and understanding for an AI. ChatGPT’s use cases are extensive, both in the contexts it can handle and in the types of NLP tasks it can perform. The former refers to its ability to tailor one response to the topic of Forum A, for example, and a different one to Forum B. The latter covers the range of help it can provide, from writing code to composing poems.

Why Invest in ChatGPT?

Big Tech’s hype around this particular chatbot stems from its potential, once integrated into workflows, to make labor better, quicker, and even cheaper. In a work environment driven by value-add and personality, the monotonous part of the workload won’t be missed if it is delegated to the AI.

Transforming the Work

AI technology is not sentient, nor is it capable of free thinking or autonomous action, so a human “handler” is still necessary. The chatbot is very much a work aid rather than a “coworker.” Companies are interested in applying it to boost productivity and reduce or remove the monotonous items on an employee’s task list, among them:

  • Compiling research;
  • Drafting marketing content;
  • Brainstorming ideas;
  • Writing computer code;
  • Automating aspects of the sales process;
  • Translating a text from one language to another.

All in all, the consensus is that ChatGPT can take over trivial, monotonous tasks, freeing up employees for more complex work.
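
To make that concrete, here is a minimal sketch of how one item on that list, translating a text, might be delegated to the chatbot. It assumes the official OpenAI Python client (openai >= 1.0) and an API key exported in the OPENAI_API_KEY environment variable; the model name and the translate helper are illustrative choices, not anything prescribed by OpenAI.

# Sketch: delegating one monotonous task (translation) to the chatbot.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate(text: str, target_language: str = "French") -> str:
    """Ask the chat model to translate a snippet into the target language."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; use whichever is available
        messages=[
            {"role": "system", "content": "You are a professional translator."},
            {"role": "user", "content": f"Translate into {target_language}:\n{text}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(translate("Our quarterly report is now available."))

The same pattern covers the rest of the list: only the prompt changes, whether the task is summarizing research, drafting marketing copy, or generating code.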

But Why Doesn’t Big Tech Build Its Own?

The answer may surprise you: it has nothing to do with the labor that goes into such a research and development project and everything to do with the risk of testing it. Simply put, ChatGPT is still in beta and undergoing public testing, so the results are… questionable (for lack of a better word). Users have reported harmful content, e.g., misinformation, hate speech, and outputs biased against women and people of color. These results have generated considerable backlash, and understandably so. Consequently, a Big Tech company cannot risk its established corporate brand for the sake of an innovative bot when it can simply invest and reap the benefits.

Citing Sources

When researching, a user looks at the sources cited and weighs the information against their credibility. Unlike standard search engines, ChatGPT does not cite its sources. Consequently, using it to research a term paper could be complicated at best and detrimental to the work’s credibility at worst.

Misleading Information

How many times have you resorted to checking your symptoms online before your doctor’s appointment to prepare yourself? Yeah, me too. But with ChatGPT, the information can be misleading, especially since it gives no indication of how accurate, or inaccurate, its answers might be. There are thousands of results for every search request, so you can imagine how many false and misleading answers are out there.

Out-there Responses

Cyberspace is filled with data from around four billion users, each with their own principles and morals, expressed daily through likes, shares, and comments. The developers trained the AI on massive amounts of code and text from the internet, including less-than-credible sources (e.g., Reddit), sites notorious for harboring highly toxic users. As a result, the engine will sometimes give inappropriate answers, and downright insulting ones at worst.

Copyright Issues

The most prominent bone to pick with artificial intelligence in general is copyright. Who owns the final creation? The company? The user? The original creator whose work the answer is based on? In fact, a class-action lawsuit has named OpenAI and Microsoft as defendants, alleging that Copilot, their AI code-generating software, violates US copyright law. Many companies are awaiting the final verdict, as this moment is a defining one in AI history.

Wrap-up

All this is to say that Big Tech companies’ decision to invest in ChatGPT rather than build their own is based largely on the risk such a bot poses to their reputation; these issues also open them up to legal liability. Investing in smaller, newer startups shields them from that risk while still allowing them to contribute to innovation and technological advancement. The strategy also offers tremendous benefits to the startups themselves, which get the capital they need for their research and subsequent development.


Inside Telecom provides you with extensive content covering all aspects of the tech industry. Keep an eye on our Intelligent Tech sections to stay informed and up to date with our daily articles.