USF Policy Lab Projects

The Policy Challenges of Artificial Intelligence in Employment and Society 

Faculty: Konrad Posch 
Students: POLS 395: Policy Lab, Spring 2026 

 

Artificial intelligence has been the most-discussed technology of the past few years, owing to the breakout success of ChatGPT and the rapid rise of competitors, investment, and calls for policy responses to mitigate harms. Unfortunately, the societal discourse around AI is prone to hyperbole and misconstrued expertise. To cut through the noise, the Spring 2026 cohort of Public Policy Minor students in POLS 395 split into two teams to address the two pillars of the policy challenge of AI: the effects of AI on employment and the effects of AI on everything else in society. By adopting a policy analysis approach, this project avoided the pitfalls of philosophical intransigence and performative position-taking, focusing instead on a pragmatic set of questions: what is happening, what can we do through policy, and what should we do with policy (that could actually be enacted)?

The AI and Employment Team focused on human autonomy in the age of AI, producing a set of recommendations for implementing human-centered algorithms in the workplace. Drawing on interdisciplinary research across economics, sociology, psychology, and public policy, the team found that harms to worker autonomy stem from how AI systems are designed in opaque and profit-driven manners. To address this, the project proposes a shift toward human-centered algorithms—systems that prioritize transparency, accountability, and meaningful human oversight, and that position workers as active participants rather than passive subjects. This includes increasing transparency in both the data used to train models and the decision-making processes they produce; incorporating worker feedback into system design; and mandating “human-in-the-loop” oversight for high-stakes decisions, such as wage determination and termination of employment. It also emphasizes designing AI to assist and enhance human labor rather than replace it, and adopting a “consent, credit, and compensation” model to protect the rights of creators whose work is used to train these systems. Implementing this model would require changes to intellectual property legislation governing how AI algorithms are trained, including a means to license, attribute, and pay royalties to the creators whose work is used in training. Otherwise, the development of AI technology will continue to depend on unpaid labor. By implementing these principles, algorithmic systems can preserve the dignity of workers, promote fairness, and support sustainable innovation and workforce stability.

The AI and Society Team was initially drawn to the climate change impacts of AI, but quickly realized that this framing was something of a red herring, since data centers account for a relatively small share of global electricity use. The local environmental impacts of AI data centers, however, proved a fruitful lens through which to analyze AI's effects and make recommendations, leading to a discussion of the environmental impact of data center expansion in the United States. The team found that the environmental impact of AI is dictated by where and how many data centers are built, how water rights are allocated, and the economic (and political) incentives that shape these decisions. Existing governance structures allow corporations to access these resources with limited restriction, and local governments often accept the tradeoffs due to competition for economic development, resulting in policies that prioritize growth over sustainability. Addressing this issue requires stronger regulatory frameworks and policies that limit water use at the source rather than relying on voluntary corporate initiatives.