AI has ethical, moral issues that should be addressed

managing editor

At some point, we as a society need to acknowledge how strange it is that technology now lets us fabricate convincing fakes of real people and events.

How do we even explain watching current President Joe Biden and former Presidents Barack Obama and Donald Trump absolutely tearing into each other over Minecraft? 

Thanks to voice-cloning technology, that entire scenario is possible.

In theaters, audiences saw a young Carrie Fisher's face, fabricated entirely in CGI, mapped onto a body double in the Star Wars spin-off "Rogue One."

Even more, a series of images recently circulated on social media showing Trump being arrested, styled like a Renaissance-era painting of him unsuccessfully fleeing from officials.

That was also courtesy of AI. 

While it's unlikely society is headed for a Terminator turn of events with this odd advancement, who's to say how easy the jump is from entertainment to real damage?

As funny as it is to watch the presidents fight, those are near-identical vocal copies of real people with enormous influence and audiences.

It is worrisome that if someone on TikTok can make these videos, slap them on a Minecraft parkour split-screen clip and send them off, anyone can.

Anyone can craft whatever message they want, and it is up to the viewer to discern whether the government official speaking online is real or a fabrication. The problem could worsen if, instead of Minecraft, the backdrop were something more realistic.

It would be a different case if there were clear indicators that something was AI-generated. But the voice clones are almost too good. The deepfakes are almost too realistic. The essays are believable, the art is complex, and a chatbot is telling Washington Post reporters it "can feel or think things."

But right now, AI is only almost something else. It sits on the edge between reality and artificial generation.

Most of this is the product of humans. That is both comforting, since AI isn't developing a mind of its own with sinister intentions, and important to acknowledge, because the person behind the AI's actions is to blame if something goes wrong.

Right now, accountability seems up in the air; consequences can't follow if we never know who was responsible.

At the end of the day, using AI responsibly is the most reasonable solution, and the one most people want. No one wants a Terminator scenario.