I’ve said this in half a dozen Zoom meetings, but the past five weeks have been like a high-speed car chase. From the perspective of leading the Office of Digital Learning, it has felt like I am in a car going 100mph down a road designed for 35mph. It started out as two weeks of remote learning and remote work, which felt like blowing out a tire. Then two weeks turned into an entire semester, which forced me to illegally run a red light and turn down another street. Now that summer courses have been called off, I feel like I’m headed off I-Spring20 and haphazardly toward the exit for Hwy Summer20.
Today I am exhausted. Exhausted in the body-wants-to-shut-down-but-can’t-sleep way. This is the second time it’s happened and probably the first “normal” weekend. Whatever that means anymore.
Which gives me a little bit of time to write and recap some lessons learned both from an instructional and administrative viewpoint.
I’m teaching two courses this semester: Intro to Advertising and Contemporary Problems in Advertising. Intro to Ad is your junior-level undergraduate survey course with 120 students; your standard lecture fare. Contemporary Problems is a senior-level undergraduate seminar course that covers ethics and social problems of advertising with 16 students.
Intro to Ad has a lot of your usual lecture suspects, such as four exams and chapter quizzes. But before you go ragging on the course design, hold your early judgment. I drop the lowest test grade except the final exam. The final exam is worth much less than the other exams to purposely lower test anxiety. Three chapter quizzes are dropped. There’s a group component that takes place four times a semester. AND my textbook is fully OER. I’ve built in positives like safety nets and engagement to make lemonade.
Contemporary Problems is about as student-centric as they come with student-led lectures and case study discussions. All assessment is writing-based.
Four weeks ago, I locked in on a game plan: try to change as little as possible.
Now I’m a big fan of innovative online courses with who’s-its and what-its. But these ain’t online to begin with. And since we agreed to a certain set of standards at the beginning of the semester, I felt it was my duty to disrupt as little as possible knowing that 136 worlds around me were about to change.
Contemporary Problems meant moving to synchronous video chats: same time, different place (Zoom). Again, I would like to reiterate this would not be my first choice for a fully online course, but (at least from my perspective) the transition has been relatively seamless. The course structure and number of students translates fairly well to the medium. It’s not the feeling of the classroom, but it is doing the job well enough.
For Intro to Ad, I was NOT going to lecture live on Zoom. Instead I am providing recorded lectures. These are chunked into mini videos that usually range from one to five minutes, so the material is much more concise overall, often totaling around 30 minutes. These are provided in little MyMedia playlists in Canvas:
In lieu of in-class engagement, I’ve created some super simple, low-stakes assessments that are just quick knowledge checks for students.
My exams have always been on Canvas for Intro to Ad, so it didn’t feel like there were going to be any major changes there. Normally, students show up on exam day, receive an access code, and take the exam in class. The test uses question banks and randomization, so no two students receive the same test. The biggest bonus is that students have immediate feedback on how well they did by the time they walk out of the room.
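The bank-plus-randomization idea is simple enough to sketch. This is a hypothetical illustration, not Canvas’s actual implementation: seed a random generator per student (say, with their student ID) so each student gets a different, but reproducible, draw from the banks.

```python
import random

def build_exam(question_banks, per_bank, seed):
    """Assemble one student's exam by sampling from question banks.

    Hypothetical sketch of bank + randomization (not Canvas internals):
    seeding per student makes the draw different for each student but
    reproducible for regrades.
    """
    rng = random.Random(seed)
    exam = []
    for bank in question_banks:
        exam.extend(rng.sample(bank, per_bank))  # draw without replacement
    rng.shuffle(exam)  # also randomize question order
    return exam

# Two students draw from the same four 10-question banks:
banks = [[f"ch{c}-q{i}" for i in range(10)] for c in range(1, 5)]
exam_a = build_exam(banks, per_bank=3, seed="student-001")
exam_b = build_exam(banks, per_bank=3, seed="student-002")
```

Same banks, same blueprint, but the two exams almost certainly differ in both question selection and order.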
This week was the first exam to take place remotely, and–surprise, surprise–I learned a lot about how these kinds of exams negatively impact specific student populations. For example, I have a number of students that receive an accommodation that affords them extended time on exams. Canvas has the ability to give students extra time on a test BUT if the student comes to the end of an availability window for an exam, Canvas prioritizes the availability window over time limits AND extended time. This meant students who were expecting extra time felt incredibly rushed. A major bummer to learn.
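The interaction is easy to state: the effective deadline is the earlier of the student’s personal time limit (with extension) and the hard availability window. Here is a minimal sketch of that logic based on my reading of the observed behavior, not Canvas internals:

```python
from datetime import datetime, timedelta

def effective_deadline(start, limit_minutes, multiplier, window_end):
    """Earlier of the (extended) personal time limit and the window close.

    Mimics the Canvas behavior described above: the availability window
    is a hard stop that overrides both the time limit AND any extension.
    """
    personal = start + timedelta(minutes=limit_minutes * multiplier)
    return min(personal, window_end)

# A student with a 1.5x accommodation on a 60-minute exam starts at 11:00,
# but the availability window closes at noon:
start = datetime(2020, 4, 6, 11, 0)
window_end = datetime(2020, 4, 6, 12, 0)
deadline = effective_deadline(start, 60, 1.5, window_end)
# The window wins: the student gets 60 of their promised 90 minutes.
```

The practical workaround is to set the window wide enough that even the latest allowed start plus the longest extended time still fits inside it.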
Additionally I had students who, for reasons I don’t need to get into, just didn’t feel ready to take the test on that specific day. In all of these cases, students were granted an alternative date or time to take the test.
I guess if I wanted to, I could have been a hard-liner and said, “You knew the rules from the outset. This window has been set long enough for you to prioritize your academics. Plus, how do I know that you haven’t received information about the exam from a peer?” Sure. Some faculty have done that, and I imagine they had good intentions for doing so. But these aren’t normal circumstances, and thus I have elected to prioritize empathy over virtually everything else.
From an administrative position, we’ve received a lot of questions from faculty about ensuring academic integrity. Being in the Office of Digital Learning, this is a common conversation that we have with faculty about appropriate assessment for online learners. We have a fairly standard approach to having faculty think through assessment:
- Modify your assessments.
  - Create question banks.
  - Make assessments open book and require analysis.
  - Avoid Googleable test items.
  - Require students to synthesize personal experiences with course materials.
  - Have students do interviews.
  - Incorporate peer evaluation.
  - Provide frequent feedback on writing assignments.
- Give clear instructions.
  - Inform students about the Academic Integrity policy.
  - Be clear about what constitutes academic misconduct and plagiarism by showing examples.
  - Let students know when collaboration is and is not appropriate.
  - Have students sign an honor pledge for the course and for major assignments/exams.
- Utilize built-in Canvas tools.
  - Quizzes have features like question banks, randomization, and Moderate Quiz.
  - Exams can be set with relatively short availability windows (less than 48 hours).
  - Don’t show answers until after the due date.
- And then, LAST, online proctoring.
For a more thorough version of this, our office partnered with the OU Office of Academic Integrity to write an article on Teach Anywhere.
Now, I can feel some of your eyes rolling into the back of your head from where I’m sitting. I get it. I’m not a fan of online proctoring for a number of reasons, including the additional cost to students and the invasiveness of the technology. As you can see above, we provided a litany of possible approaches faculty can take first.
Even still, we’ve found cases where online proctoring is necessary. For example, we have an online Masters of Accountancy program. The state of Texas requires that students enrolled in an online program have their exams live proctored in order to be eligible to sit for the CPA exam. My guess is that this is an outdated rule that came into existence long ago as a mechanism to defend against for-profit online diploma mills, but we are still living within those rules.
So, while we recommend proctoring, we don’t recommend it as the first stop down Integrity Road (I am really starting to lean into this car-chase analogy, aren’t I?). We recommend it as a last resort, and, while I’m not a huge fan, I also understand that I have limited knowledge about what’s most appropriate for any given instructor’s course and will defer to faculty on what their needs are. Yet even once faculty land on proctoring, they realize how difficult the companies are to work with. It can sometimes require an MOA with the faculty member or department, may require additional software for students, and requires you to give the company enough of a heads-up so they are appropriately staffed and understand the full set of rules you set out (Do students have to provide an ID? Do they get a piece of scratch paper? Are certain calculators allowed or not?). The point is that some faculty think online proctoring is a get-out-of-jail-free card that lets them think less about amending assessments. The truth is that in most cases it requires more work.
My opinion? “Ensuring academic integrity in online instruction” is a total head fake. When cheating happens (and, yes, it happens online just like it does on-ground), the onus is as much on the instructors as it is on the students. Instructors likely:
a.) didn’t set proper expectations and just expected things to happen the way that they thought about it in their head (guilty, btw)
b.) didn’t think through the assessment enough and/or
c.) weren’t thoroughly engaged in the practice of online instruction
But, given the recent move to remote instruction, we’ve been getting a lot of questions and felt it appropriate to investigate tools that could be made available to faculty in the event that proctoring was an absolute necessity. In my opinion, we needed to prioritize a few factors:
- Make sure students weren’t being asked to pay for proctoring. We quickly documented that asking students to pay for proctoring during times of remote instruction is not allowed.
- We needed a tool that was reliable. Yes, it needed to meet technical standards, but it also needed to literally be there when we needed it. If you haven’t read the news lately, live proctoring still requires humans, and those humans are stuck at home. That’s become a problem for proctoring companies, who are having to temporarily suspend live proctoring.
- Preferably, the tool is easy for instructors to use. Ideally, this means that it integrates into quizzes in Canvas, our LMS, rather than being a fully separate assessment platform such as TopHat.
We landed on Respondus fairly quickly. IT had already integrated Respondus’ LockDown Browser into Canvas, so Respondus Monitor was expected to be technically feasible. For those unfamiliar with the tool, Monitor is webcam-based: it records students taking tests and then leverages AI-based tools to flag potential acts of misconduct.
The OU IT Learning Spaces team was quickly able to complete the integration and start training support staff. Shout out to that team, which is doing a bang-up job, by the way. But it was during the trainings that we began to fully realize the limitations of the tool. For example:
- LockDown Browser requires Windows or MacOS. It won’t run on Chromebooks or mobile devices. Respondus has an iPad app, but it has terrible reviews and IT was not able to successfully validate it on the iPad.
- It requires students have a webcam and microphone.
- It does not function well with screen readers.
In many ways, these felt like deal-breakers. We are in a crisis situation where, with local libraries across the country and our own computer labs closed, students only have access to what they have access to. While this tool was going to allow us to message to instructors that a proctoring solution was available, it feels like a cheap band-aid at best and a highly inequitable tool at worst.
I began to voice my concerns to senior administration that OU needed to reconsider any messaging about proctoring. A concern I had was that when you say “We have proctoring, but the tool is problematic,” the latter part often gets ignored. Instructors might zero in on the first half of the statement. Further, messaging had just come out about Zoom security issues. Back-to-back messages about being careful about Zoombombing and securing exams may lead some instructors to throw in the towel on online instruction.
After a week of sitting on messaging, we were still continuing to get a lot of questions about proctoring, so it felt necessary to go ahead and write an email. Mark Morvant has been really championing the messaging and I have to give the guy all of the credit for being a senior administrator who is willing to be an active, vocal participant in these times. He ultimately wrote:
> Respondus LockDown Browser and Respondus Monitor can be enabled on quizzes in Canvas but should only be used after instructors have thoughtfully considered and fully exhausted all other options for deterring cheating. This tool is considered highly inequitable and strongly discouraged during periods of remote instruction where students may not have access to approved devices.
I’ve spent more time than I wanted thinking about online proctoring, but I am glad about where we’ve landed for now. Phil Hill has written about the phases we are likely to see, and I mostly agree with his revised version of where we are headed:
I actually think we’re somewhere between Phase 2 and Phase 3 on Hill’s adoption spectrum (I think that’s good?), which means I’ll have more to write soon about where we are headed as we move toward a remote summer. The goal is to take advantage of the extra time we have to get out in front and help faculty consider what remote 2.0 looks like.