The Ethics of Using GenAI Tools



Deep Fakes

Watch the video below

Would you want this guy as your professor?

As you probably guessed, this is a complete fake. It fooled several fellow staff members before I let them in on it. The video was made with veed.io, an AI video-generation tool.

Deep fakes are images, video, or audio that mimic a real person or situation so well they can easily be mistaken for the real thing.

You can no longer trust video, audio, or images that depict reality. Make sure you verify through reliable sources, such as established news outlets or academic resources. Question everything that sounds sensational until it is verified by multiple sources.


Read More

Leon Furze is a PhD student in Australia focusing on the implications of AI in writing instruction and education. He crafted a series of blog articles on the ethics of AI use, which formed the basis for the video and the thoughts about ethical AI use presented here. You are encouraged to visit his blog site and review the more detailed writing:

Image Note

The image in the header above was generated using the generative AI tool Stable Diffusion. The prompt I entered was "Three college students discussing an ethical dilemma." Notice how this tool exhibited multiple ethical problems. First, it depicted all of the students as white. In fall 2022, only 42% of college students in the U.S. were white (National Student Clearinghouse Research Center), yet the tool assumed all of them were. We could also question the tool's bias about the age of college students, although that is not definitive in the image.

The second issue is accuracy. I asked for three college students. The app appears to have drawn a pair of female twins and two white males. That is clearly not "three" individuals and is an example of an AI image-generating tool failing to deliver on accuracy.

Video Transcript

Generative artificial intelligence tools are not without problems. They are not perfect. There are many ethical concerns.

Briefly, we’ll take a look at each category based on the writing of scholar Leon Furze.

Bias

AI tools absorb huge amounts of data from the internet. This data includes incorrect, hateful, and biased information. The data comes from humans, after all, and we are sometimes hateful, biased, and absolutely wrong.

There are many known examples of AI tools and algorithms being biased. The State of Florida and other states use an AI system to evaluate defendants at sentencing. The system uses many factors to calculate how likely a defendant is to commit further crimes. According to ProPublica, the system is biased against African Americans.

While the system is “fair” in the sense it does not use race directly, it is biased because race has played a part in the historical data used by the AI algorithm to calculate a risk score.

Worse, AIs may have biases that are unknown and not obvious.

Truth

An even bigger issue with text generators like ChatGPT is accuracy, or truth. These tools do something called hallucinating, which means they make stuff up. Remember, generative AI is mimicking human communication…it is not really reasoning.

There have been some pretty famous hallucinations so far. Lawyers were fined $5,000 for submitting papers to a court that referenced made-up court cases. In Georgia, OpenAI is being sued because ChatGPT falsely indicated a man was stealing money from a group he belonged to.

Even AI tools that generate images make weird mistakes. Look at the hands of AI-generated humans…they often have six or more fingers.

At TU, you are on the hook for being correct. If you use AI, you must make certain its output is accurate. Learn to verify what a tool says through other means, such as verifiable sources like the library or your course textbook.

When you read their responses, they sound confident and convincing. Be very aware, though, that these tools are mimicking the human writing they’ve seen.


Privacy

Whatever you input into a GenAI tool may find its way into the data stream. While tools are beginning to provide some safeguards on your private data, they have a long way to go.

Remember to never provide sensitive information you don’t want the whole world knowing.

Datafication

While we’re talking about data, you should consider the datafication of the world. Everything is becoming data these days, and tech companies are able to leverage that data to build wealth and power.

Copyright

Copyright, and who owns the source material, is a major issue with AI. Remember, these tools built their training data by scraping information from the internet. The creators of that information were not consulted prior to the scraping.

Image, sound, and text generating applications are being sued by artists and writers who did not give the AI companies permission to use their work. It’s unclear how courts will rule on these issues, or how companies like OpenAI could make their tools “unlearn” certain information.

Another issue is that AI-generated works currently cannot be copyrighted because they were not created by humans.

Environment

Another ethical question about AI is the massive energy consumption required to train and operate these models. Tools like ChatGPT rely on giant banks of computers in the cloud that require significant energy to run.

Human Labor

Related to environmental concerns, today’s AI models were not created exclusively by powerful computers but also by human laborers involved in training the tools. In the case of ChatGPT, workers in Africa were paid about $2 an hour.

For the current generation of tools, there will always be a large human component to the creation of an AI.

Power

Furze describes power as the most complex ethical question surrounding AI. How much power are we ceding to AI-powered tools? How much wealth and power are we giving to the companies that provide AI tools, tools built by summarizing the online record of human experience that all of us created? Do AI tools propel people who already have resources even further ahead of others?

It’s difficult for us to answer those questions right now, but we should be thinking about them.

Dark Web AI

One last area of concern with AI is the dark uses of these tools. Phishing emails designed to steal our passwords will look even more realistic. Cybercriminals will be able to duplicate their schemes more rapidly. When Uncle Hal’s Facebook account gets hacked, a much more realistic chat version of him will try to convince you to go buy some gift cards to get him out of jail.

Like any tool, AI can be used for good or evil. You should consider these questions as you look to navigate AI in your studies and professional career.

In our final video, we want you to think about how you can and should utilize generative AI tools in your classes.