Ethics of Technology, AI, and Neuroethics 2024/2025
General info
If you’re feeling overwhelmed by your studies and facing challenges beyond your resources and abilities, you can reach out to the psychological consultant for study-related difficulties.
If you’re going through a difficult time or need support for any reason, don’t hesitate or feel ashamed to seek psychological help. Additionally, our faculty members offer free psychological support in the form of phone consultations for students.
If you’re facing technological or health-related difficulties that prevent or significantly hinder your studies, please contact me so we can work out a plan for your participation.
This elective course is dedicated to developing your ability to analyze and address ethical challenges related to neuroscience, AI, and broader technological advancements.
(The information on this page pertains to the Spring 2024/2025 edition).
Key details:
- We meet on Thursdays at 9:45 AM.
- Each session explores a different ethical challenge related to AI, neuroethics, and technology.
- Office hours: Tuesdays 11:00–13:00 and Fridays 11:30–13:00 (Room 65AB). Email: michal.wyrwa [at] amu.edu.pl.
- Discussions on morally sensitive topics require a safe environment—let’s work together to create an inclusive and constructive space.
- Prior experience with philosophy, neuroscience, or AI is not required (but always nice to have :)). The focus is on practical ethical reasoning, not technical expertise or the history of philosophy.
- Readings and handouts are available here.
- Note that readings are available for each session; they are listed alongside the discussion questions. Some are marked with a ‘★’—these are particularly important for your future self preparing for the exam. By the end of March, you will receive a detailed list of exam questions and key topics.
- The readings are there for your reference: ★-marked ones are required for the exam, but you don’t have to read them for our meetings. Of course, you’re welcome to, but for discussion purposes, it’s just as valuable to research the topic online and engage with different perspectives.
Course Calendar
(27.02) Let’s meet (introduction & course organization)
Foundations of neuroethics, AI ethics, and technology ethics—i.e., what is applied ethics? Overview of the course structure.
(6.03) Neuroscience & responsibility
The role of neuroscience and neuroprediction in criminal justice. To do:
- Watch the two-part PBS documentary Brains on Trial with Alan Alda (p. 1 and p. 2).
- Find a recent real-world case where neuroevidence was used in court.
- Review discussion questions – onedrive.
(13.03) Neuroenhancements
Ethical and practical considerations of using neuroenhancing substances and technologies. To do:
- Find two real-life examples of neuroenhancement: one with a positive and one with a negative effect (according to you).
- Review discussion questions – onedrive.
(20.03) AI Autonomy & responsibility
When is AI making a decision vs. just executing a process? Responsibility vs. accountability of AI, developers, policymakers, and users. Who should decide what AI can decide? To do:
- Join one of three groups (sign up here):
- AI Companies’ Innocence – “Developers should not be liable for AI decisions. The technology is neutral.”
- Governmental Regulations – “Governments must take full responsibility for AI’s effects.”
- User Responsibility – “The users of AI (companies, consumers) should be responsible for what AI does.”
- Prepare structured arguments on the debate points – onedrive – and create a slide summarizing key claims and supporting data.
(27.03) Algorithmic bias & fairness in AI
Bias in hiring, policing, facial recognition, and predictive analytics; ethical mitigation strategies—should we bias AI toward fairness? To do:
- Read about Gender Shades – www.
- Find another recent example of AI bias (excluding generative AI—covered later).
- Review discussion questions – onedrive.
(3.04) Privacy, surveillance, & neuro-rights
Privacy issues in AI; surveillance capitalism, technofeudalism; data privacy regulations. To do:
- Review discussion questions – onedrive.
(24.04) Generative AI
AI hallucinations, deepfakes, media manipulation, AI-generated propaganda, LLMs’ biases, who is accountable for AI-generated content? The ethics of creating vs. using generative AI. To do:
- Review discussion questions – onedrive.
(8.05) Technology & the future of work
Will AI and other technologies replace human workers? Which professions are at risk? What benchmarks should we use to assess the impact? The ethics of algorithmic management and its consequences. Is technology a workforce equalizer or divider? To do:
- You will join one of two groups and prepare a structured response to one of the questions.
- Is AI really replacing workers, or is this just another tech bubble?
- Is technology making the workplace fairer or more exploitative?
- Consider creating a slide summarizing key claims and supporting data.
- More details are available here – onedrive.
(15.05) Relationships with artificial agents
Can people form meaningful relationships with AI? Is it ethical for AI to mimic human emotions? Should AI caregivers & therapists replace human emotional support? Could AI ever “deserve” rights? To do:
- Find one real-world example of controversial human-AI relationships (e.g., AI companions, therapists, or virtual influencers).
- Review discussion questions – onedrive.
(22.05) Technology & climate change
The computational demands of computer technology vs. green, sustainable initiatives; real impact of technology on the environment vs. Silicon Valley optimism. To do:
- Review discussion questions – onedrive.
(23.05) Drop-in consultations regarding your Ethical Impact Assessment
(29.05) EIA presentations
(5.06) Final exam
(12.06) Makeup exam (if needed)
Course Format & Assessment
The course emphasizes interactive learning, balancing discussions, case analyses, and project-based work.
Most sessions will consist of discussions based on pre-assigned questions, ending with a short lecture contextualizing key ethical issues. Some sessions will feature pre-assigned group debates, where you will be asked to take a structured position on controversial topics. Other times, we will focus on case studies, exploring data, or investigating certain issues hands-on (e.g., algorithmic biases).
Assessment structure:
- Ethical Impact Assessment (13pt)
- Critical evaluation of a specific technology or a trend (e.g., a generative AI tool, neurotechnology, neuroenhancements, a social media platform’s recommendation algorithms, predictive healthcare tools, etc.).
- The project follows a structured auditing methodology, which we will discuss later during the term.
- There is no written report required, but you will present your findings near the end of the semester.
- This template includes detailed explanations on what to include and how to perform your assessment: template
- Final Exam (23pt)
- Closed questions + a case analysis on ethical challenges in technology and neuroethics.
- Designed to assess your ability to apply ethical thinking and analyze real-world neuro- and technological dilemmas rather than pure memorization. Then again, to think and analyze properly, one needs some knowledge, right?
- Being prepared for class (20%, 9pt)
- Active participation in discussions, debates, and group work.
- For most meetings, a list of discussion questions will be provided, and you are expected to prepare responses.
In total, you can earn 45 points:
- from 93% – 5.0 (≥41pt)
- from 85% to 92% – 4.5 (≥38pt)
- from 76% to 84% – 4.0 (≥34pt)
- from 68% to 75% – 3.5 (≥30pt)
- from 60% to 67% – 3.0 (≥27pt)
- less than 60% – 2.0
If you don’t want to complete an Ethical Impact Assessment, you can instead prepare a ~20-minute presentation segment for one of the following sessions (individually or in pairs). The presentation should cover the current state of the debate on a given ethical topic and the latest relevant data.
- Privacy, surveillance, & neuro-rights
- Generative AI
- Relationships with artificial agents
- Technology & climate change