Overview
AI and automated decision-making systems are becoming more prominent in our lives, yet they often lack oversight, transparency, or accountability. Governments, agencies, and private companies use them for predictive policing, school admissions, health decisions, welfare and benefit eligibility, immigration, criminal risk assessment, housing, recruitment, and credit scoring. This six-week program covers the implications of bias in data science & AI systems and the importance of ethical decisions in developing and implementing these algorithms. Throughout this program, learners will gain a strong understanding of data privacy, the effects of bias, and disinformation.
Who Should Take This Program
Beginners: Any student, advocate, or enthusiast interested in the intersection of social justice & technology
Professionals: Practitioners interested in best practices for designing, using, or questioning automated decision-making systems
Overall Learning Outcomes
Gain a conceptual understanding of the implications of bias in AI systems and of key topics in AI ethics discussions
Understand the importance of diversity and ethical decision-making for individuals and for social justice
Formulate a business problem as a hypothesis question, apply appropriate methodologies throughout the analytics cycle, and communicate results that translate insight into business value
Engage in informed discussion of how to design, use, or question AI-powered systems within an ethical framework
Apply these concepts to case studies of biased systems
Curriculum Modules
- Week 1: Overview
- Week 2: Data Privacy and Power
- Week 3: Explainability, Accountability, and Trust
- Week 4: What harm can biased systems cause? Where can bias enter an AI system? What are some best practices to avoid it?
- Week 5: AI Literate Citizen
- Week 6: Disinformation