A new look on AI, what it means for a working society

HOPE SMITH
editor-in-chief
hope.smith393@my.tccd.edu

In the early stages of AI, it’s important to keep a level head about the situation as it pertains to the workforce. It is serious and should not be taken lightly, but the end days are not here yet.  

It’s a topic we have all talked about before; in fact, I previously wrote an opinion titled “AI has moral and ethical issues that need to be addressed.” I just can’t seem to get enough of it.  

But this opinion will not retract the claims made before. It only means to approach the topic from another angle.  

To start, let’s address the many ways AI could replace humans in the workforce.  

From the looks of it, that includes every job that doesn’t require a human presence. And as automation advances, more jobs are constantly opening up to the possibility of adopting AI: a fully automated McDonald’s, for example.  

The risks are clear from the panic that ensued after it was revealed that AI art generators were trained by combing through and copying other artists’ work posted on the internet. Writers on strike were not thrilled to find out that AI was also being used to write scripts, either.  

Now, people aren’t just scared, but mad. Nobody wants to be replaced by code. But the current reality is that AI is still a baby learning to walk. Society is only getting a taste of its capabilities.  

There are people right now who are angry and worried enough that efforts are being made to regulate AI.  

These concerns arise especially around commerce and services.  

Introducing: the World Trade Organization’s agreement on “Trade-Related Aspects of Intellectual Property Rights.” It does not cover AI, and many people think it should. 

Basically, TRIPS protects intellectual property, but because everyone sort of shrugs when asked, “Who does this AI-generated artwork belong to?” the agreement leaves that question up to each individual country. 

Well, in 2021, an appeal was made to the United States District Court for the Eastern District of Virginia by Stephen Thaler, developer of the AI software “DABUS.” The case is called Thaler v. Hirshfeld.  

Thaler requested that DABUS be listed as the inventor on patents for any inventions it generated.  

It was denied, as the court ruled that under the Patent Act, an “inventor” must by definition be human.  

So the conversation about limiting AI has already started. There is no national policy on AI currently, but that doesn’t mean no one is working toward one.  

An episode of The New Yorker Radio Hour, “Should We, and Can We, Put the Brakes on Artificial Intelligence?” delves into the topic with Sam Altman, CEO of OpenAI, and Yoshua Bengio, one of the leading experts in AI.  

When asked in the episode whether AI writing its own script is currently possible, Bengio said no, but that there is no way to write that possibility off in the future. He cautioned that because AI is only now making major advancements, now is the time to take precautions. 

For that reason, he signed an open letter calling for a pause on giant AI experiments for at least six months. The letter asks that development be halted so AI developers can take a step back, agree on proper safety measures, and see that AI does not move beyond human control. It also asks that they work with policymakers on AI law, creating a more secure framework with no holes for AI to fall through. 

As a final note, Sam Altman explained that while AI is advancing into the workforce, it isn’t the end of human work. He believes people will adapt and move beyond the entry-level work AI takes over, as we’ve seen with new technology over the years.  

Hope is not lost. AI is unpredictable right now, but if people continue pushing for legislation and regulation, there’s no reason we can’t advance with AI as a tool.