Artificial Intelligence (AI) is a powerful technology. From AI-powered services like ChatGPT and Google Gemini to Microsoft Copilot, AI can be incredibly helpful. You can get summaries of webpages, assistance with cooking a dish, and even generate images. Looking beyond that, though, AI is quite dangerous. Over the last year, AI has been doing more harm than good, and it is scary in more than a few ways.
AI can be used to harm people emotionally
You have probably heard of the nonconsensual nude and sexually explicit deepfaked images of the ever-popular singer Taylor Swift that spread online. The images were generated using a since-patched exploit in the Microsoft Designer AI app. The incident even caused X, the platform formerly known as Twitter, to disable searches for the phrase “Taylor Swift” entirely at one point. Swift’s lawyers also stepped in and threatened action against the perpetrators who created the images. Eventually, things quieted down and people stopped talking about it.
All of this happened to someone famous, with access to lawyers and the means to stop the madness, but what if this happened to you or me? How can you stop someone from using public photos, AI tools, and other digital representations of yourself to damage your image? While it might be beyond your control, the White House has urged action, and lawmakers are pushing legislation and changes at the FTC on the matter. We need to act now before it’s too late.
AI can cost people jobs
AI services pull information that’s publicly available on the web. As a writer myself, that’s what I find haunting. You can ask an AI assistant to craft something for you, and it will pull the information it finds to put together a poem or longer prose. That’s why students use AI services to cheat on exams, and why many checks have since been put in place to prevent academic cheating. A simple Google search reveals tools like GPTZero, for example, which check for AI plagiarism.
But beyond that, AI also threatens writing, the newsroom, and journalism itself. CNET, as another case, used an AI engine to generate financial content; checks later showed that much of it was wrong, which is worrying since journalism is supposed to be held to ethical standards. AI was also at the center of the Hollywood screenwriters’ strike, with writers worrying that AI-generated content could essentially replace them.
As writers, we feed the search engines that power AI, but there could come a day when the AI ends up feeding us instead, and puts us out of work.
AI can damage a country and cost elections
Finally, there’s the political side of things. We’re at the start of an election season, a few years removed from the last one, where misinformation spreading on social media was a cause for concern. That’s why I worry about what deepfakes could do to our nation’s 2024 elections. AI-generated audio of Joe Biden has already been used to mislead voters during the New Hampshire primary. What if tactics like this continue? Sure, tech giants have come together on the matter and promised that checks are in place, but things only seem to be getting worse. Every time action is taken, something else gets in the way.
AI isn’t going away and it’s only getting worse
That something else comes from OpenAI. The company just unveiled a new tool, Sora, which it claims can generate videos from text prompts. OpenAI promised that only academics and researchers will have access at first, but the tool is still troubling. It brings back all of my worries. It threatens creative professionals like video editors and artists: why would their skills be needed if a chatbot can do the work? It also ties back to my other two points about elections and personal harm. What if this content spreads on social media? I sure hope we don’t end up going down that path, and that we can instead help make AI more responsible.
The featured image in this post was generated using Microsoft Copilot.