March 2024 Security Topic - Generative A.I. and Data Privacy

Generative A.I. is a complex topic. Conversational A.I. has the potential to increase support for students with disabilities, multilingual learners, and any student who can benefit from greater personalization and adaptability in digital learning tools. It can also automate and improve administrative tasks across the university (Cardona et al., 2023).

Generative A.I. can also cause harm. Without careful implementation, it can automate discrimination and perpetuate bias (Chopra et al., n.d.). Misuse of this technology can result in the unintended disclosure of private information and violate the rights of consumers (Kang & Metz, 2023). Frankly, the list of concerns and implications could easily fill several more articles and papers.

Despite these risks, we must continue to explore the technology's potential. Higher education cannot ignore current and evolving technologies: they are already used throughout society, they affect the lives of many people, and they will shape the future of learning and working (Almas et al., 2023). The adoption of the automobile offers a useful parallel.

Cars built before 1889 did not have headlights, speedometers, safety glass, turn signals, seat belts, or airbags. They were dangerous both to drivers, who frequently drove recklessly, and to pedestrians, who were not used to avoiding an unexpected object moving so fast. As early as 1865, the British Parliament passed a law restricting the use of motor vehicles to protect public safety (Keil, 2021).

And yet today, hardly anyone alive can imagine a world without cars. It simply took time for them to become safer, as societies came to understand the risks involved and began to require those protections. Right now, generative A.I. is a car without headlights: we do not yet understand everything needed to make it safe. Improving safety will take more time as the technology develops and our understanding of it grows.

This period of risk and danger doesn't mean we shouldn't drive the car. It means we should drive with greater care and not be reckless: slow down, and stay aware of the needs and concerns of other drivers and pedestrians.

This awareness is especially important for data privacy. Generative A.I. services can consume and disclose data in unexpected ways. We have an obligation to control access to, and disclosure of, data protected by state, federal, and industry regulations. That means we must refrain from submitting such information to A.I. services.

When we need to process protected information, we must select and use A.I. services that make a contractual commitment to the university to protect that information. Although the widespread adoption of generative A.I. is new, this requirement is not. Conversational A.I. adds further complexity because generated conversations may themselves be protected by law: not just what a student says to the A.I. model, but also the model's reply to that student (Cardona et al., 2023).

It will take time for generative A.I. to gain the protections it needs to be considered safe. In the meantime, we should use and explore it, but we must stay aware of what information we are disclosing and to whom. We are still required to protect the privacy of our community members.


References

Almas, B., Gavin, D., Francini, B., & Riman, J. (2023). FACT^2 Guide to Optimizing AI in Higher Education. Faculty Advisory Council on Teaching and Technology. https://fact2.suny.edu/wp-content/uploads/GuideAITaskGroup-FACT2-101223.pdf

Cardona, M. A., Rodríguez, R. J., & Ishmael, K. (2023). Artificial Intelligence and the Future of Teaching and Learning. Office of Educational Technology. https://tech.ed.gov/files/2023/05/ai-future-of-teaching-and-learning-report.pdf

Chopra, R., Clarke, K., Burrows, C. A., & Khan, L. M. (n.d.). Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems. Retrieved March 25, 2024, from https://www.ftc.gov/system/files/ftc_gov/pdf/EEOC-CRT-FTC-CFPB-AI-Joint-Statement%28final%29.pdf

Kang, C., & Metz, C. (2023, July 13). F.T.C. Is Investigating ChatGPT Maker. The New York Times. https://www.nytimes.com/2023/07/13/technology/chatgpt-investigation-ftc-openai.html

Keil, C. (2021, September 20). New technology has always been scary. Medium. https://medium.com/pronouncedkyle/new-technology-is-always-scary-8bf977a13773


Details

Article ID: 150216
Created: Mon 3/25/24 5:14 PM
Modified: Fri 3/29/24 9:34 AM