The AI Copyright Circus Moves to Nvidia


In generative AI news, three authors, Brian Keene, Abdi Nazemian, and Stewart O’Nan, are suing Nvidia for using their copyrighted works to train its NeMo AI platform.

  • They claim their works were part of a dataset of roughly 196,640 books used to train NeMo.
  • The proposed class action argues that Nvidia’s acknowledgment of using the dataset amounts to an admission of copyright infringement.

Three authors are suing Nvidia over alleged unauthorized use of their copyrighted works to train its NeMo AI platform.

Brian Keene, Abdi Nazemian, and Stewart O’Nan claim that their literary works were part of a dataset of approximately 196,640 books used to train NeMo to emulate ordinary written language. The lawsuit covers Keene’s 2008 novel “Ghost Walk,” Nazemian’s 2019 novel “Like a Love Story,” and O’Nan’s 2007 novella “Last Night at the Lobster.” Notably, the dataset was taken down in October over reported copyright violations.

In a proposed class action filed in San Francisco federal court on Friday night, the authors argue that Nvidia’s acknowledgment of using the dataset amounts to an admission of copyright infringement. They are seeking unspecified damages on behalf of people in the United States whose copyrighted works contributed to the AI model’s training over the past three years.

This legal action adds Nvidia to the growing list of tech companies facing copyright litigation over the use of protected material to train AI models. OpenAI and its backer Microsoft have been sued on similar grounds.

The debate over the ethics of the situation is starting to feel like a broken record. So let’s be realistic for a second.

For some reason, humanity has always craved progress. It never really mattered whether that progress was a wheel or a human-sized robot that tells better jokes than Gabriel Iglesias. As long as humanity was moving forward in some shape or form, we were satisfied. Were we apprehensive? Absolutely. Everything should come with a healthy dose of apprehension, after all. But for us to achieve the level of progress that all these visionaries aspire to reach, we need to compromise.

A child is not born with a preset package of knowledge; basic instincts do not count. So why are we treating AI any differently? At least AI won’t draw on your white walls when you turn around to answer the phone. AI needs to learn from somewhere. Scientists need to give it examples of good and bad and everything in between, and it only makes sense to use the best humanity has created to teach it. And considering AI is quickly becoming an integral part of our reality, something has got to give.

Solution 1: Lawmakers stop turning a blind eye and a deaf ear to the current climate of AI discussions. They can then modify, amend, or rewrite copyright laws to fit the times. They will probably anger a lot of people and entities, especially those who feel entitled to others’ hard work.

Solution 2: Tech companies, through lengthy discussions with the concerned parties, work out an appropriate fee to pay for every work they use in training their AI platforms.

However, there is a happy medium somewhere: a sweet spot where lawmakers amend the law to accommodate the changing times while also ensuring that creative minds get their rightful due. For that, they need to acknowledge that ambiguous wording will not work here.

I’m not saying lawmakers are dragging their feet, but I’m not saying they are rushing to put an end to these copyright lawsuits and set the record straight either. We’re in limbo for now.
