AI and the Alignment Challenge Podcast


About this listen

We dive deep into the intricacies and ethical considerations of AI development, focusing specifically on OpenAI's ChatGPT and GPT-4. Join us as we discuss how OpenAI approached the alignment problem, the impact of reinforcement learning from human feedback, and the role of human raters in shaping ChatGPT. We'll also revisit past AI mishaps, such as Microsoft's Tay, and explore their influence on current AI models. The episode delves into OpenAI's efforts to address ethical concerns, the debate over universal human values in AI, and the differing perspectives of users, developers, and society at large on AI technology. Lastly, we tackle the critical issue of employing workers from the Global South for AI alignment work, examining the ethical implications and the need for better support. Tune in to uncover the complexities and breakthroughs in the evolving world of AI!

Our guest is Dr. Joel Esposito, a Professor in the Robotics and Control Engineering Department at the United States Naval Academy, where he teaches courses in robotics, unmanned vehicles, artificial intelligence, and data science. He is the recipient of the Naval Academy's Raouf Award for Excellence in Engineering Education and the 2015 Class of 1951 Faculty Research Excellence Award. He received both a Master of Science and a Ph.D. from the University of Pennsylvania.
