My friend Christian Sandvig, who directs the Center for Ethics, Society, and Computing at the University of Michigan, started an interesting thread on Twitter yesterday. It began:
“I’m super suspicious of the ‘rush to postdocs’ in academic #AI ethics/fairness. Where the heck are all of these people with real technical chops who are also deeply knowledgeable about ethics/fairness going to come from… since we don’t train people that way in the first place.”
Christian goes on to point out that it’s exceedingly rare for someone with PhD-level experience in machine learning to have a strong background in critical theory, intersectionality, gender studies and ethics. We’re likely to see a string of CS PhDs lost in humanities departments and well-meaning humanities scholars writing about tech issues they don’t fully understand.
I’m lucky to have students doing cutting-edge work on machine learning and ethics in my lab. But I’m also aware of just how unique individuals like Joy Buolamwini and Chelsea Barabas are. And while I mostly agree with Christian, I think it’s worth asking how we start training people who can think rigorously and creatively about technology and ethics.
It’s certainly a good time to have this conversation. There are debates about whether AI could ever make fair decisions given the need to extrapolate from data in an unfair world, whether we can avoid encoding racial and gender biases into automated systems, and whether AI systems will damage the concept of meaningful work. In my area of focus, there are complex and worthwhile conversations taking place about whether social media is leading towards extremism and violence, whether online interaction increases polarization and damages democracy, or whether surveillance capitalism can ever be ethically acceptable. And I see my colleagues in the wet sciences dealing with questions that make my head hurt. Should you be able to engineer estrogen in your kitchen so you can transition from male to female? Should we engineer mice to kill off deer ticks in the hopes of ending Lyme disease?
That last question has been a major one for my friend and colleague Kevin Esvelt, who has been wrestling with tough ethical questions: Who gets to decide whether your community (Nantucket Island, for instance) should be a testbed for this technology? What does informed consent mean when it comes to releasing mice engineered with a CRISPR gene drive into a complex ecosystem? Admirably, Dr. Esvelt has been working hard to level up in ethics and community design practices, but his progress just points to the need for scholars who straddle these different topics.
I think we need to start well before the postdoc to train people who are comfortable in the worlds of science, policy and ethics. Specifically, I think we should start at the undergraduate level. By the time we admit you into somewhere like the Media Lab, we need you to already be thinking critically and carefully about the technology we’re asking you to invent and build.
I was lucky enough to attend Williams College, which focused on the liberal arts and didn’t seem to care much what you studied so long as you got into some good arguments. I was in a dorm that had a residential seminar, which meant that everyone in my hall took the same class in ethics. Arguments about moral relativism continued over dinner and late into the night, in one case ending with a student threatening another with a machete in her determination to make her point. It wasn’t the most restful frosh year, but it cemented some critical ideas that have served me well over the years:
– Smart people may disagree with you about key issues, and you may both be making reasonable, logical arguments but starting from different sets of core values
– If you feel strongly about something, it behooves you to understand and strengthen your own arguments
– You probably don’t really understand something unless you can teach it to someone else
My guess is that courses that force us to have these sorts of arguments are critical to unpacking the intricacies of emerging technologies and their implications. To be clear, there’s the field of science and technology studies, which makes these questions central to its debates. But I think it’s possible to sharpen these cognitive skills in any field where the work of scholarship is in debating rival interpretations of the same facts. Was American independence from England the product of democratic aspirations, or economic ones? Is Lear mad, or is he the only truly sane one?
The fact that there are dozens of legitimate answers to these questions can make them frustrating in fields where the goal is to calculate a single (very difficult) answer… but the problems we’re starting to face around regulating tech are complex, squishy questions. Should governments regulate dangerous speech online? Or platforms? Should communities work to develop and enforce their own speech standards? My guess is that the answer looks more like an analysis of Lear’s madness than like the decomposition of a matrix.
But liberal arts isn’t all you’d want to teach if the goal is to prepare people who could work in the intersection of tech, ethics and policy. Much of my work is with policymakers who desperately want to solve problems, but often don’t know enough about the technology they’re trying to fix to actually make things better. I also work closely with social change leaders like Sherrilyn Ifill, the president of the NAACP Legal Defense Fund. She came to our lab to learn about algorithmic bias, noting that if the NAACP LDF had been able to fight redlining two generations ago, we might not face the massive wealth gap that divides Black and White Americans. Sherrilyn believes the next generation of redlining will be algorithmic, and that social justice organizations need to understand algorithmic bias to combat it. We need people who understand new technologies well enough to analyze them and explain their implications to those who would govern them.
My guess is that this sort of work doesn’t require a PhD. What it requires is understanding a field well enough that you can discern what’s likely, what’s possible and what’s impossible. One of my dearest friends is a physicist who now evaluates clean energy and carbon capture technologies, but has also written on topics from nuclear disarmament to autonomous vehicles. His PhD work was on Bose-Einstein condensates, a strange state of matter in which atoms, trapped in place with lasers and cooled to extraordinarily low temperatures, collapse into a single quantum state. His PhD and postdoc work have basically nothing to do with the topics he works on now, but the grounding he has in understanding complex systems and the implications of physical laws means he can quickly tell you that it’s possible to pull CO2 from the environment and turn it into diesel fuel, but that it’s probably going to be very expensive to do so.
I’m imagining a generation of students who have a solid technical background, the equivalent of a concentration if not a major in a field like computer science, as well as a sequence of courses that help people speak, write, argue and teach about technological issues. We’d offer classes – which might or might not be about tech topics – that teach students to write for popular audiences as well as academic ones, to write an op-ed, and to make a convincing presentation. We’d coach students on teaching technical topics in their field to people outside their fields, perhaps the core skill set needed to be a scientific or technical advisor.
There are jobs for people with this hybrid skill set right now. The Ford Foundation has been hard at work creating the field of “Public Interest Technology”, a profession in which people use technical skills to change the world for the better. This might mean working in a nonprofit like the NAACP LDF to help leaders like Sherrilyn understand which battles in algorithmic justice are most important to fight, or in a newsroom, helping journalists maintain secure channels with their sources. I predict that graduates with this hybrid background will be at a premium as companies like Facebook and YouTube try to figure out whether their products can be profitable without being corrosive to society… and the students who come out with critical faculties and the ability to communicate their concerns well will be positioned to advocate for real solutions to these problems. (And if they aren’t able to influence the directions the companies take, they’ll make great leaders of Tech Won’t Build It protests.)
(I was visiting Williams today and discovered a feature on their website about four alums who’ve taken on careers that are right at the center of Public Interest tech.)
Building a program in tech, ethics and policy helps address a real problem liberal arts colleges are experiencing right now. The number of computer science majors at American universities and colleges doubled between 2013 and 2017, while the number of tenure-track professors increased by only 17%, leading the New York Times to report that the hardest part of a computer science major may be getting a seat in a class. Really terrific schools like Williams can’t hire CS faculty fast enough, and graduates of programs like the one I teach in at MIT are often choosing between dozens of excellent job offers.
Not all those people signing up for CS courses are going to end up writing software for a living – my exposure to CS at Williams helped me discover that I cared deeply about tech and its implications, but that I was a shitty programmer. Building a strong program focused on technology, ethics and policy would offer another path for students like me who were fascinated by the implications of technology, but less interested in becoming working programmers. It would also take some of the stress off CS professors as students took on a more balanced course load, building skills in writing, communications, argument and presentation as well as technical skills.
Christian Sandvig is right to be worried that we’re forcing scholars who are already far into their intellectual journeys into postdocs intended to deal with contemporary problems. But the problem is not that we’re asking scholars to take on these new intellectual responsibilities – it’s that we should have started training them ten years before the postdoc to take on these challenges.