Charles Kenneth Roberts

Politics, History, Culture

Links to Read: December 2018

Since I lack the time (or time management skills) to blog as much as I’d like, I’m going to try using this blog as a platform for something useful: a reading journal of sorts, in which I will link to interesting articles I have read and which I think other people may be interested in. Will I stick with it? Tune in to find out!

How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually.: “Studies generally suggest that, year after year, less than 60 percent of web traffic is human” is perhaps the most interesting thing here. Increasingly I view the internet like the automobile: something that, despite its great value and potential, turned out to have an overall harmful impact on society.

Something this article doesn’t really address, but which I think is part of the same phenomenon, is how the internet, or more specifically social media, drives the most artificial, robotic behavior, not by force but by some perverted choice. I recall how distinctly unnerving it was during the Kavanaugh hearings the way in which people who were clearly real humans, not bots, would go through Twitter’s search function to respond almost word-for-word with the same argument to random people’s tweets. You can also include people who respond to tweets or in the comments section with the identical “fake news!!!” type responses over and over again. This isn’t a spontaneous human response, but it’s not a bot, either. We’re becoming the lamest kind of cyborgs.

Here’s a sort of related story about how to abuse the Amazon Marketplace. Here’s a sort of related story about, well, whatever this is.

How Mark Burnett Resurrected Donald Trump as an Icon of American Success: This article captures much of what’s wrong with America. It’s tough to pick a single excerpt, but this is relevant:

“The Apprentice” was built around a weekly series of business challenges. At the end of each episode, Trump determined which competitor should be “fired.” But, as Braun explained, Trump was frequently unprepared for these sessions, with little grasp of who had performed well. Sometimes a candidate distinguished herself during the contest only to get fired, on a whim, by Trump. When this happened, Braun said, the editors were often obliged to “reverse engineer” the episode, scouring hundreds of hours of footage to emphasize the few moments when the exemplary candidate might have slipped up, in an attempt to assemble an artificial version of history in which Trump’s shoot-from-the-hip decision made sense.

The Insect Apocalypse Is Here: I did not expect this first batch of links to be such downers, but here we are.

Nancy, a 1930s comic strip, was the funniest thing I read in 2018: Here’s something uplifting. Nancy is so good.


My Academic Notebook Productivity System

There are a lot of productivity systems out there. I have developed one that works for me, and I’m laying it out here in case you might find it useful, too. It’s a mash-up drawing especially on Getting Things Done, but also on Bullet Journal, Autofocus, and Raul Pacheco-Vega’s Everything Notebook. This system won’t work for everyone, but I think it will work for people with jobs like mine. I work at a small, teaching-oriented college where everyone has to wear a number of different hats. During the summer, I can focus on bigger or more intensive projects, but during the school year, I have what I think of as lots of little stuff to deal with. I have a heavy teaching load and a heavy service load, and I try (with limited success) to get some scholarship done during the school year as well. My day is interrupted frequently with classes or scheduled meetings. This is the system I developed to manage that.

It has three basic parts:

  • Everything I have to do goes on a to-do list, either the master weekly to-do list or its own separate to-do list.
  • Everything that has to be done at a specific time goes in an electronic calendar with notifications.
  • Everything has a home where it belongs in a physical or electronic filing system.

The notebook. The most important part of my personal productivity system is the notebook to-do list system. For me, the most valuable part of Getting Things Done is the concept of universal capture–you never have to worry about remembering things because they are all written down and organized. But I am easily and often distracted. If I don’t have a reminder directly in front of me, I forget. I need to be able, at a glance, to see everything I have got going on right now. So, directly beside my keyboard on my desk lives a notebook. During the school year, it looks something like this:

[Photo: a sample weekly to-do list page in a legal pad]

I like to use legal pads because they’re longer, but you could use whatever notebook you want. I rewrote this sample week to be easier to read, which distressingly means that this is as neatly as I can handwrite anything, even when I am specifically trying to do so. If I have a weekly to-do list that is longer than a single notebook page, I take that as a sign that I need to re-prioritize or reorganize something in my work life.

At the beginning of each semester and the summer, I go through the notebook and write down the week number, date, and any to-do items that are recurring (like grading online discussion threads every week) or known well ahead of time (like preparing for final exams or revising a paper to present at a conference). As the semester goes by, I add new things as needed. If there’s going to be a meeting I need to prepare for in three weeks, that goes in the list three weeks from now. If someone mentions something that I don’t quite know what to do with right now, it goes on the list for further processing. Each item is an action item–it is a specific task that I can do in a reasonable amount of time to further some goal or meet some responsibility.

In this system, most of the things I have to do are given a specific day to be done, but not a specific time. This allows for greater flexibility. I used to try scheduling specific times for grading or revisions, but one emergency meeting or rescheduled essay and the whole day is off-kilter. As I go, it is easy to add new items to the list; because the day of the week is written in the left margin, adding new things does not mess up the order or confuse me later. If something is too big for a single action item, it gets a separate list of its own. Like most college professors, I assume, I produce a tremendous amount of ephemeral printed material, so the back of an unused meeting agenda or old quiz usually becomes the home for a sub-to-do list and gets tucked behind the appropriate weekly page in my to-do notebook. Full-blown projects get their own separate to-do lists and folders, with the relevant weekly responsibilities also listed in the notebook.

As each item is finished, it gets marked with a single line. Sort of like the Bullet Journal, I have a system of indicating whether a task was completed, deferred, or not completed. So, by the end of the day on Friday, my to-do list might look like this:

[Photo: the same weekly to-do list at the end of the week, with items marked through, arrowed, or X’d]

Something marked through is completed. An arrow indicates that an action item was moved–perhaps a committee meeting was rescheduled, so I didn’t have to prepare for it. An X indicates that I did not complete an item. I can mark through something but also X it to indicate to myself that I partially or unsatisfactorily completed an item, perhaps that I didn’t sharpen up a paper presentation as much as I had hoped and had to add it to next week’s list. Things with just an X weren’t completed; sorry I didn’t finish grading your essays, HIS 105! Anything more complicated to remember than that becomes a note or memo in the appropriate file.

In addition to the listed action items, the notebook can serve as an inbox of sorts (because if I have another separate inbox where I do not constantly see everything, I will forget about it). Papers to process, a new copy of an academic journal, or notes I wrote to myself can be piled on/under the notebook to remind me to deal with them. I don’t need to write “Read Agricultural History” when I can just set my copy under the notebook, where it is impossible to miss. If I have something that’s more substantial than a single action item, like reviewing a book, it can be turned into to-do list action items.

Finally, at the bottom of the notebook is an orange tab. That’s my “everything else” section, a combination list of ongoing projects and things to think about when the semester is over or which otherwise don’t require immediate attention. A fellow professor mentioned using Kahoot in class and recommended it, but after some thought I decided I’m going to need to consider if or how to implement it more carefully, so it goes to the end of the notebook. Periodically (i.e. when it gets to be more than a page long, so I can’t see it all at once) I reorganize that section of the notebook into a series of more manageable reminders or to-dos.

The calendar. When I was an undergraduate, every semester, I did the exact same thing. I got a physical planner, maybe one handed out by the college, or one gifted by an optimistic relative, or one I wasted money on by purchasing myself. I wrote all my classes down in the schedule and put important due dates throughout. I’d use it for about two weeks, and then one day I would forget to look at it, and then that was it. If I printed out a schedule and taped it over my desk, that sometimes worked better, but the real lesson is that I don’t remember things and need to be told what to do constantly like a child.

Thanks to the magic of computer technology, I can have something tell me what to do constantly. I use Google Calendar (which links to my college e-mail address, since we’re a Google-powered campus, but you can use whatever works for you), and every single scheduled event goes into it. This means all committee meetings, college events, meetings with students, classes, and even my office hours. Everything gets a ten-minute notification, which makes a distinct buzz on my phone and a little pop-up on my computer; things that are farther away, or for which I am afraid I will forget to prepare, get an earlier reminder, too.

The end result of this is that I never worry if I have missed a meeting or that I am supposed to be somewhere that I am not, because my calendar tells me what to do. During the work week when I am not at a scheduled something, I’m doing whatever is on my to-do list for the day, or in the unlikely event I am ahead, taking care of things for later in the week.

The filing. In a way, my notebook is a combination to-do list and chronological index of or hub for my ongoing projects and responsibilities. Instead of constantly having all kinds of stuff rattling around in my head, I can safely let it go, assured that I will remind myself what I need to do and when I need to do it. Everything else gets put away until it is needed. A research project, course redesign, or some other big enterprise I’m working on has its set of to-do lists and notes, but it’s at home in a manila file folder in my desk unless I’ve specifically noted to do something with it on my weekly to-do list or I come across some relevant information that needs to be processed.

Banning Laptops in the College Classroom

There has been another internet dust-up about whether or not professors should ban laptops in their classrooms. Matthew Numer says students should be insulted by laptop bans, and Kevin Gannon writes that the argument behind the laptop ban “fails miserably as an intellectual position about teaching and learning.”  But I think the arguments against a laptop ban are, generally speaking, wrong. With all the usual caveats and exceptions, in my experience laptop bans are a good idea in introductory-level college classes. I would (and do) recommend the practice to my colleagues.

To me, there are three strong arguments against the laptop ban, and I think the two articles above do a good job of capturing them: 1) laptop bans are unfair to students requiring disability accommodations; 2) laptop bans are a bad response to larger issues in education, like poor pedagogical techniques or overcrowded courses; 3) laptop bans fail to treat students like the young adults they are.

Only the first argument has any real traction with me. That’s why I meet with the director of our campus’s disability services office each semester before the class begins, to discuss how the ban might affect the students taking my class that semester. I’m fortunate to work at a college small enough to make it realistically possible to discuss the impact of classroom policy changes like this at the individual level, and I’m aware not every professor has that luxury. Also (as I think everyone who does not allow laptops in class feels compelled to explain), I don’t have a blanket ban; students can get special permission to use a computer. I don’t call people out in class for using a laptop, although I do try to catch them as they leave to make sure they know the policy. So far, most of my students requiring accommodation prefer to record lectures in addition to taking notes by hand, rather than typing notes on a computer. The most regular laptop users I have are our frequently-injured student athletes; there are consistently enough students with permission to use a computer that I don’t think using one would single out a disabled student, any more than is the case with other, more obvious accommodations, like not being in class or leaving class early to take an exam or quiz with extended time. I don’t think this is a perfect solution; I can imagine a shy student who would be better accommodated using a laptop but deciding not to do so because they don’t want to (in their view) cause trouble. But it’s also the case that the argument for better disability accommodations could work the other way, in favor of a laptop ban. Limiting the extent to which additional screens can distract someone is an upside to a technology ban that people rarely talk about.

That laptop bans are a bad response to larger issues in higher education: well, yeah. Allowing laptops in class isn’t going to make the college enough money to fund the additional faculty members to allow me to have the ideal seven-person seminar-style class I want to hold. It won’t make the college enough money that we could loan all our students laptops or have all our classes in computer labs so that I could re-structure class meetings to incorporate technology (at my institution, I can’t be certain a student has a car or a cell phone, much less a laptop; I’ve yet to find a way to incorporate electronic devices into the classroom that doesn’t leave some people behind). Until then, we’re all making do.

The most common reason for allowing laptops in the college classroom is that students are adults and should be allowed to make these choices on their own; if they don’t choose to participate in class, that might be a rational choice or not, but it’s their decision to make. Really, we the professors have the obligation to win their attention. To quote Matthew Numer:

Our students are capable of making their own choices, and if they choose to check Snapchat instead of listening to your lecture, then that’s their loss. Besides, it’s my responsibility as an educator to ensure that my lecture is compelling. If my students aren’t paying attention, if they’re distracted, that’s on me. The same goes for anyone presiding over a business meeting.

(I feel like they will fire you if you have a meeting with the boss and play on your laptop instead of at least appearing to pay attention, but it’s been a while since I had a job outside academia.)

and Kevin Gannon:

Ultimately, it comes down to how we see our students. Do we see them as adults, or at least as capable agents in their own learning? Or do we see them as potential adversaries in need of policing? Do we see them as capable of making their own decisions and learning from those that aren’t necessarily the best ones?

The arguments that students are grown-ups and that it’s the fault of the professor if they/their course are insufficiently engaging seem to me to be manifestly contradictory. One thing that successful adults do is pay attention to things they find boring. I have to do it all the time! I’m not enthralled navigating SACSCOC regulations or doing my income taxes, but I do it, because it’s important. I don’t personally find everything in history (even in my field or my classroom) equally exciting or interesting, but there are boring things you have to understand to do the neat stuff. Everybody wants to be able to order wine in a fancy Parisian restaurant; nobody wants to learn irregular verb conjugations.

I teach at a tiny, mostly-associate’s degree granting institution in the rural Deep South, largely teaching non-majors in survey courses. I can’t always count on my students to know what the best decisions are; that’s part of what I am trying to teach them. I could let a student learn that watching soccer online instead of paying attention will lead to a failing grade; in fact, I did, in one of my larger classes before I stopped allowing students free use of electronic devices in class. I don’t feel like doing so empowered the student.

Perhaps most importantly, I don’t want to allow students to make bad decisions that make learning harder for other students in the classroom. I’m hesitant to write further, even anonymously, about my current or former students on the internet, but I think most people who teach survey-level courses have had students who simply don’t know how to be in a college classroom, or students who are smart enough that they can get through a survey class without trying particularly hard in class. I have seen such students act in a way that makes it more difficult for other struggling students to succeed. I’ve been teaching long enough not to take it personally when students look at their phones rather than listen to me, but students participating in classroom discussion or making a presentation might take it differently.

I can imagine an introductory history survey that uses laptops or other electronic devices in a way that makes that class better (though I can’t easily imagine it realistically happening where I currently teach). Generally speaking and in my experience, students using electronic devices do not improve the classroom, and they usually make it worse. And I’ve not heard differently from a student. Anecdotes and data, etc., but I’ll finish this with a story: At the end of the semester, I usually give each class as a whole a bonus opportunity: for every day that the entire classroom is engaged (including reading the material beforehand, etc., but most importantly, nobody focusing more on their cell phones than on the class), the entire section gets a bonus on the final exam. From what I can tell, this improves grades, but more than that: I’ve also had several students tell me that I should do the same thing the whole semester, that they felt like they did better when they knew they couldn’t become distracted. I’ve yet to find the student who does better with more distractions in the classroom.

“Good riddance”

There are a variety of things wrong with the Democratic Party; there are a lot of reasons why it’s, generally speaking, losing in the 21st century. I think the most important reason is that, despite what the base wants and the overall popularity of the policies supported by rank-and-file Democratic membership, the party leadership is beholden to the economic elite who favor policies which benefit the already-rich and powerful. Universal healthcare is a good example of a policy that most Democrats, especially the most motivated and committed, want but that the party leadership has hesitated to get behind.

But that’s not the only thing wrong with the Democrats. Josh Marshall has written an article about Alabama’s Republican nominee for Senate (and likely winner) Roy Moore, whose most important financial backer has expressed Christian supremacist and neo-Confederate ideas and is associated with the League of the South (as is Moore).

That’s awful, though it’s not terribly surprising. But whenever you look at stories like this on liberal/Democratic-leaning websites, or when they’re posted by prominent liberal Twitter accounts, there’s a common tendency in the comments or responses. By my count a solid majority of the replies to the above-quoted tweets and a sizable chunk of the replies on TPM are some variation of “Good riddance!” or “Let them go” or “Kick them out.”

So: an explicitly Christian supremacist with ties to more-or-less openly white supremacist organizations wants to take control of a state that votes 35-40% Democrat and is 30% or 35% non-white (depending on how you count Hispanics). For a sizable portion of the people who care enough to post about it on the internet, the response is to just let them do that.

This isn’t a matter of whether Democratic policies favor those voters, or whether Republican policies harm them. Compare the way that conservatives respond to stories about, say, the (much over-stated) liberal bias on college campuses. There are calls for affirmative action for conservative academics, calls for the firing of professors who make inflammatory liberal statements, insistence on equal rights for using college spaces for even the most extreme conservative/anti-liberal speakers, and demands that college administrators or even state legislatures step in to protect conservative voices. There’s an element of “That’s what you get when you go to liberal colleges” or talk about conservative alternatives, but the response is in general extremely supportive of conservatives who decide to attend those institutions (a self-selected identity, unlike where you’re born).

I know it’s the internet, and it’s Twitter, and it’s the comments section. I know a lot of people think it’s funny to post such thoughts and wouldn’t actually favor secession. But I also know that lots of black people didn’t vote in 2016 because they didn’t think the Democratic Party did anything for them. Black voter turnout declined sharply in 2016, and despite the most anti-immigrant candidate in decades, Latino turnout didn’t increase very much. Republicans were effective in reducing (and in some cases, repressing) turnout, but Democrats were also bad at increasing turnout.

The reason so many black and rural Americans feel like Democrats don’t care about them is because lots of Democrats don’t really care about them, or at least they don’t act like they do.

Southerners, southerners, and southerners

People were rightly critical of this idiotic tweet sent by Virginia gubernatorial wannabe Corey Stewart:

[Embedded tweet from Corey Stewart]

But I am glad for it. Perhaps you, like me, are a history professor teaching American history, and it’s the end of the semester, so you’re in the middle of or getting into a discussion of secession and the Civil War. This tweet is a perfect teaching moment, a distillation of everything wrong with how we talk about secession, the Confederacy, and the South.

Stewart’s comment is about the wrongly-described “Confederate” monument (actually a monument to an attempted Reconstruction-era white supremacist insurrection). Stewart describes “a Yankee” and “a Southerner” as though these are real and natural categories. The implication is that someone from the North and someone from the South would have opposite perspectives about this monument. Stewart’s understanding of “Southerner” means white supremacists who supported the Confederacy; the South’s black population, or for that matter the large white Unionist population and whites, like Italians, who also faced racialized violence, is completely erased.

I think I am going to start class today with this tweet.

Perhaps the most important part of this tweet, which really does an amazing job of packing so many wrong and bad assumptions into so few characters, is the “don’t matter” part. This rhetorical maneuver asserts that people who oppose white supremacist public monuments don’t care about history, southerners, or the South. People on Twitter have clever comebacks (“really? nothing is worse?”), which are fun and enjoyable, but we shouldn’t allow the rhetorical boundary-setting of this tweet to go unchallenged. Nobody is saying these monuments or the history they represent don’t matter; they mattered, and still matter, all too much.

Thinking about teaching online II

I am once again revising the way that I teach online classes, a process I previously documented here. For a while, I have been doing discussion-based online classes, but I have cut more and more from the course, and now it has become a very simple thing. I’ve slimmed the class down to a single question: do students know how to ask and how to answer historical questions about the period this class covers? If students can do that (which means at least a familiarity with a variety of different skills and knowledge), I will feel like I have done my job.

When I first started teaching online, I had students take weekly quizzes (always multiple choice, because that’s what works for online quizzes), and I never felt like they really learned anything from it. All it proved was that students were familiar enough with the textbook chapter to look up vocabulary terms before the time ran out on the quiz. I never really liked giving quizzes in the first place, online or face-to-face; they have always felt punitive, a way to force students to do the readings. Being a historian isn’t learning an unconnected series of facts (though of course you do have to know things to do history). But like a lot of people, I gave quizzes because I took quizzes when I took a history class, so that’s what you do.

I cut quizzes from my online classes two years ago, but I think I was letting that mentality influence my approach to the online class. Last fall, my on-campus American history classes wrote review essays (of The Marrow of Tradition and Why We Can’t Wait, both of which I recommend for use in the survey), so I had my online class write the same essays. Great! Except, no, not great. Those books worked as a way to bring in and discuss primary sources and how we understand what we know about history and all sorts of other history-class-type-questions. But access to primary sources isn’t a problem in an online class: If you have the internet, you have access to more primary sources than you can ever use. In my online classes, we already look at primary sources every week, so adding these two books to what we were doing broke the rhythm of the course. It wasn’t clear how the assignments belonged to the class or furthered its learning goals, because they didn’t. I know very well that an online course is different from a traditional one, but it’s so easy to let what we’re used to doing creep into teaching in new environments, even when we’re being careful about how we do it.

I think students do best in any class when it is clear what the point is. Why are we reading this book or asking these questions? It has been evident at times that students did not understand the point of discussions, for example, because they did not understand how a particular essay or primary source related to the larger historical context. To make sure that students understand what we’re doing, I have taken what I think is the most important single part of being a historian and made it the entire class.

So, what does the course look like now? It’s all built around asking and answering historical questions based on our textbook (The American Yawp, which includes some primary sources) and some outside readings, videos, and additional primary sources I provide each week. Except for the first week (which is about how to ask a good history question and the famous Five Cs), students are required to write one good history question and answer it, in the form of a shortish essay. My history students are writing weekly essays. Since you can look up anything on the internet now, there are very strict requirements that the essays be cited and written in the students’ own words, because writing something in your own words means (ideally) that you have to understand it. In addition, students have to respond to three other students’ posts, expanding on their questions and essays and discussion in some way. The essays and discussion are collectively half the grade (30% questions and essays, 20% discussion).

I have done something similar to this in the past, but all due once a week. This made discussion worse, since frequently a big chunk of the class wouldn’t post until the day of the deadline. So, this semester, questions and essays are due on Wednesdays, with discussion taking place by Fridays. This means slightly less time is potentially available for discussion, but I found that most of the discussion takes place in two or three days anyway; it’s hard to sustain conversation much longer than that. Will students be confused about multiple weekly deadlines? I hope not! There is always a bit of a learning curve in the first couple of weeks of an online class; my expectation is that everyone should understand the groove by the end of week three.

Jonathan Rees said that he had the problem of assigning too little discussion and writing in his online classes, which led to weak discussions with small classes. I think I had the opposite problem: I have in the past asked students to write three questions and answer three other students’ questions; this resulted in some students writing hurried answers, and it led to a scattershot discussion which saw lots of questions but few good answers. My hope is that fewer “starter” posts will result in more substantial conversation. I also hope that the responsibility of having to answer their own questions will lead to a more thoughtful formulation of each student’s question.

In addition to the questions and discussions, two exams make up the other half of the grade. The fun part: These exams will be, as much as possible, made up of the questions that the students themselves wrote; I will be going through the discussion each week and pulling out the best questions and the things students are the most interested in to make up a study guide for the midterm and final exams. I may have to provide them with questions if they miss important stuff, which would (hopefully) be a learning opportunity about how to ask the right questions about historical material. But the goal is to get to a situation where students know how to ask the right questions about historical material, and then the tests determine how well they know the answers to those questions.

I am trying to boil my online classes down to the essence of history as a discipline. This semester, I have all the work laid out ahead of time, but if it goes well, next time I will only lay out the first half and give myself the option (should the students be ready for it) to add more sophisticated assignments in the second half. That means more work for me, but it would be enjoyable work.

The Alt-Right and Perspective

Most decent people who are aware of the phenomenon are unhappy about the “alt-right,” a batch of wannabe Nazis who are well known for being loud racists on the internet. I think people are right to have a negative response to the movement. Anyone who proudly claims an alt-right identity is an awful person who makes the world worse. Worse, the success of the alt-right’s preferred candidate, Donald Trump, and the mainstream media’s (as usual) ham-handed and incompetent attempts to understand it risk normalizing a strain of proud, open white supremacy and misogyny in a way that hasn’t been accepted in polite company in decades. These are bad things, and we should resist them.

But the alt-right didn’t elect Donald Trump. There aren’t enough of them; if Twitter didn’t exist, nobody would have any idea what the movement even is. And the alt-right didn’t invent racism. American racial inequality is, by most objective measures, worse today than it was thirty years ago. This isn’t a new and recent trend.

Furthermore: The alt-right isn’t locking people up and ruining their lives for possessing marijuana. The alt-right didn’t disenfranchise six million voters because of felony convictions. The alt-right didn’t deport 2.5 million people under the Obama administration. The alt-right isn’t propping up the Saudis’ horribly destructive war in Yemen. The alt-right didn’t organize drone strikes that have killed hundreds of civilians and which Amnesty International says may amount to war crimes. The alt-right isn’t causing climate change or single-handedly stopping the implementation of meaningful measures to address it.

The alt-right is bad and should be opposed. But we should not let the easy and obvious villains obscure the everyday atrocities in which, ultimately, every American is complicit.

Sample Size

The imminent college football playoff announcement has me thinking, curiously enough, about the run-up to and aftermath of the 2016 presidential election. The big debate college football faces, which it has never been able to adequately address, is how to judge championships and rank the “best” team versus the “most deserving” team. College football is inherently subjective in too many ways. Even with the four-team playoff, teams depend on the luck of other outcomes to determine whether they make the playoff, and a playoff committee will necessarily be subjective in deciding whom to include and how to rank them. Even the apparently objective criterion of winning a conference title almost always depends on the luck of whom you beat and to whom your conference mates lose, especially when leagues without round-robin schedules use overall conference records, rather than divisional records, to determine divisional champions; not to mention that college football currently has five major conferences for four playoff spots. Other sports leagues, like the NFL or the Premier League, solve this problem in very different ways that college football cannot, for a variety of structural reasons, borrow.

Trying to decide who the best team is, trying to understand outcomes, requires an appreciation that even the best teams will not necessarily be the ones that win, because of the role that luck and chance play in any sport, especially college football. Take a coach like Nick Saban. “Lucky” seems like an odd adjective for him; by virtually any measure, he’s been the most effective coach in the country since Alabama hired him. Saban is the best at the most important skills in college football, recruiting and putting together coaching staffs, and he’s excellent at Xs and Os and player development. But! Only twice (2009 and this season so far) has a Saban team done everything that could objectively be done, and even 2009 took quite a bit of on-the-field luck. The 2011 national title took a bizarre rematch, 2012 required late-season losses by multiple teams, and so on. This is how college football works; even the best need a lot of luck.

This makes ranking “the best” all but impossible. You could make an argument that Penn State, who beat Ohio State, is better than Ohio State. But Pitt beat Clemson and Penn State both, and I don’t think anyone would seriously argue that Pitt is the best team of those three. Despite the faux objectivity of the playoff, chance and uncertainty are a part of college football, and that means that sometimes statistically unlikely things will happen. You can say the team which has the better overall season is better, or the team which won a head-to-head contest is better, and both can be right. In all sports, but especially in college football, there is no objective way to determine this.

Like almost everyone else, I was very confident and very wrong about who would win the 2016 election. After the election, there was a lot of talk about what the results said about polling: all pollsters are just frauds, we cannot trust them in the future, this suggests larger undercurrents about who can and cannot be accounted for as voters to a degree which undermines any effort at election prediction, and so on. But I don’t think, after considering it almost a month later, that the election really tells us anything about the effectiveness of polling or predictions. Trump’s election looks in many ways like a freak occurrence: his combined margin in Wisconsin, Pennsylvania, and Michigan looks like it will end up being less than 100,000 votes, a tiny victory. Clinton won a significant popular vote victory. The polls predicted that Clinton had a much higher chance of winning the election than Trump did; that still seems basically correct. It’s statistically unlikely that you’ll get heads three times in a row when flipping a fair coin, but seeing it happen once doesn’t change the odds going forward, and it doesn’t mean the original probability estimate was wrong.
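The coin-flip point is easy to check numerically. Here is a minimal Python sketch (the function names and the simulation are my own illustration, not anything from the original post): an outcome with a 12.5 percent chance still happens in roughly one of every eight trials, so observing it once says nothing about whether the estimate was wrong.

```python
import random

def prob_streak(k, p=0.5):
    """Probability of k independent heads in a row, with per-flip probability p."""
    return p ** k

def simulate_streaks(k, trials, seed=0):
    """Fraction of trials in which all k flips of a fair coin come up heads."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # A trial "hits" only if every one of the k flips lands heads.
        if all(rng.random() < 0.5 for _ in range(k)):
            hits += 1
    return hits / trials

print(prob_streak(3))                 # 0.125: unlikely, but far from impossible
print(simulate_streaks(3, 100_000))   # converges toward 0.125 over many trials
```

The simulation makes the blog's point concrete: the "unlikely" outcome shows up thousands of times in 100,000 trials, and none of those occurrences implies the 1-in-8 estimate was a bad prediction.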

Elections are like college football: sometimes, things are weird and the most likely outcome doesn’t happen.

2016 and Watersheds

The 2016 election has been a uniquely depressing experience. Somehow the Republicans managed to nominate, from a candidate pool ranging from awful to merely incompetent, the worst possible option: a racist authoritarian whose existence is a stinging rebuke to the notion of American meritocracy and who undermines easy assumptions about the progress of equality in America. Hillary Clinton, practically the embodiment of what people mean when they complain about hawkish neoliberalism, is somehow the good guy in the 2016 race.

One element that makes the race so unpleasant is its steady presence: the downside of being constantly connected is that you’re, well, constantly connected. Social media and cable news overflow with obnoxious political commentary, stories of Trump’s latest bigoted gaffe, or horserace commentary. You sometimes feel like you can’t get away from this horrible experience through which the American people are inexplicably putting themselves.

Part of this is a reflection of the increasingly (and regrettably) partisan nature of American political culture, but I also think part of it is a need to make the race more exciting than it actually is. Like play-by-play announcers insisting that anything can happen in the fourth quarter of a 45-14 football game, the clickbait/24-hour news media has a vested economic interest in obscuring the extent to which the election has already been decided. In all likelihood, the race is over, and Donald Trump has lost. Most organizations predicting the outcome, as the New York Times forecast indicates, overwhelmingly favor Clinton:

[Image: New York Times election forecast]

Scrolling down to the state-by-state predictions, Trump would have to hold on to all the Republican states, win every toss up state, and pick off one of the states leaning Democrat to win. Polls can be wrong and anything can happen etc etc etc, but a huge Clinton win picking up 350+ electoral votes seems far more likely than a Trump win by any margin.

There are a few explanations for this state of affairs. One obvious reason that Trump is losing is that he’s Trump. Clinton is a weak candidate, and the things that are necessary to be good at campaigning are not her strengths (or are outside her immediate control, like the continued influence of sexism). But Trump is just about the worst imaginable candidate, magnifying Clinton’s strengths like political experience and temperament for the job while minimizing her weaknesses because he shares so many of them.

I’m curious how much of this is bigger than Trump, though. Demographic forces and political shifts appear to have favored the Democrats for several presidential elections in a row (Congress and state elections being a different matter). Since the end of the Cold War, the smallest electoral vote total any Democratic nominee has managed was John Kerry’s 251 in 2004, which was also the only time since 1988 that a Republican won a majority of the popular vote. The two Republican victories were extremely narrow; the Democratic wins have been solid.

I wonder if historians will look at the 2008 presidential election as one of those watershed elections, like 1896 or 1932, that permanently changed the political landscape. 1896 seems like a good comparison: a partisan era when Democrats usually had safe states to count on, but a time when Republicans almost always had the advantage going into a presidential race.

And yet! Republicans hold a solid majority of the political offices around the country. Republicans make up a majority of governors and, even in this awful national year, expect to hold on to the House of Representatives, with a toss-up for control of the Senate. Maybe a better analogy is 1968, when the New Deal coalition broke apart, ushering in an era of Republican dominance in presidential elections against Democratic control of Congress.

That said, one can easily take this kind of talk too far; I remember people talking about the inevitable generation of Republican ascendancy even after the 2000 election. Barack Obama’s election may represent only unhappiness with the unpopular George W. Bush and a desire for change, while a Clinton victory might best be understood as a referendum on the exceedingly unfit Trump. But it looks to me like most of the trends at a national level favor the Democrats; it would take considerable Democratic incompetence or scandal (or something perceived as such) to shift the balance.

The Free State of Jones

I saw Free State of Jones last night, and I really liked it. It was historically accurate, as much as I think a film like that can be. Puncturing the myth of a solid South during the Civil War is crucial, and making Reconstruction an important part of the story is much needed.

This review will have spoilers, if you haven’t seen the movie (and you should!).

People who know a lot more than me can tell you about the specifics of the history, but I’m particularly interested in one aspect of the movie as a movie, rather than as history. Smart reviewers like Christian McWhirter and Glenn Brasher agreed with critics who found the last act of the film, about Reconstruction, the weakest. Admittedly it’s a small sample size (my wife and myself), but the audience I watched the film with found the Reconstruction sections, jumping ahead using photographs and captions, just fine. The captions perhaps didn’t capture the realities of Reconstruction, but watching a black man registering sometimes illiterate voters and seeing what became of him as a result made Reconstruction real and weighty in a way I haven’t seen done before. I think it works as a whole for a few different reasons.

The most obvious reason is that the movie is just well made. Good character building and good plotting can make up for each other; we’re invested in the world and the people. Viewers care about what happens next. I particularly liked the way that Rachel’s story was so carefully told, not as explicit in its depictions of horrors as, say, Twelve Years a Slave, but still making clear that even the best-treated slaves faced unthinkable hardship and mistreatment. Matthew McConaughey almost edges over into White Savior territory, but the film makes it clear that the participation of blacks in the brief blooming of African American political life was just as crucial as the participation of whites.

I also think this movie matches the times, capturing the zeitgeist of our less optimistic age. As Glenn notes, “Further, perhaps audiences SHOULD walk away from a Civil War movie with a dejected feeling of ‘was it all in vain?’ as white supremacy is restored in the post-war South.” This is not a happy time for American political culture; police brutality toward African Americans is clearly not going away, and Trumpery runs rampant. It’s been clear for a while that improvements in race relations and opportunities for minorities in America have stalled out. Some important gains have proven to be small victories that don’t address larger structural and social problems.

But the most subtle reason I think the movie works is how it’s structured in terms of the roles the characters play. Newton Knight is the main character of the film, but I don’t think he is a protagonist. Knight seems like the protagonist; he does a lot of stuff, and he’s played by the most famous actor. But the protagonist of a movie is who (or what) drives the plot. Knight wants not to fight in a war for rich people, he wants Confederates to stop taking his neighbors’ stuff, he wants to maintain the interracial community he has created. This is all reactive! You can have a protagonist who reacts to things if they drive the plot forward (Luke Skywalker), but Knight doesn’t.

The real protagonist of the movie is the Confederacy itself, or more generally the existing system of white supremacy, embodied in a series of unlikable villains. White supremacy sets the plot in motion (before the movie begins), it drives the action, and it changes the most over the course of the film. What does this protagonist want? It wants to maintain itself; it wants to survive – a classic protagonist’s impulse. The film starts at the beginning of Newton Knight’s story, but for the protagonist it’s in medias res; the movie starts, after all, in the middle of a battle in the middle of the Civil War. We the audience should already know but are reminded frequently that the Civil War is over slavery; that’s taken for granted. White supremacy has begun a war to protect itself. To survive it must win the war; to win the war it must take from its non-slaveholding population and protect the system of slavery.

I think you could say this movie has two opposing protagonists; Knight decides what he wants (the declared principles of the Free State of Jones), but it takes him a while to get there (which happens – it takes The Dude half the movie to figure out his narrative in The Big Lebowski). But I think Knight works best as the antagonist. The protagonist, white supremacy, wants things, and Knight is trying to stop it from getting them. He is effective, but he isn’t necessarily decisive. Due largely to off-camera events (something that is emphasized when Sherman doesn’t provide much help – the larger Union effort doesn’t see southeast Mississippi as strategically important), white supremacy falls to its lowest point.

White supremacy tries to keep up the same system, forcing Moses’ son back into the fields, but that doesn’t work in the long run. We see Knight stand up against black codes, but so does the United States Army. The jumping ahead during Reconstruction doesn’t happen at the protagonist’s peak, but at its nadir. So white supremacy changes; instead of fighting in the open in battles and seizing crops, it takes shape in fraudulent vote totals and men in masks setting fire to homes or committing murder off-screen. Newton Knight changes a little, but white supremacy changes the most.

I don’t know if this entirely explains the difference between audience reactions (as of right now 71% approved at Rotten Tomatoes) and the critics’ views (42% approval). Critics might see this clunky structure as a problem. But I think audiences identify with Knight. He’s who you want to be (unrealistically so, but that’s the point of movies): bravely standing up to a larger system, doing the right thing even if you’re facing a system you ultimately can’t defeat.