Below is a rough summary of the talk I gave in conjunction with Amy Collier, Daniel Lynds, and Jim Luke at OpenEd17. The tl;dr is that I’m convinced we need to start publishing analyses of the open platforms we adopt in order to bring transparency to a number of metrics, including not just data ownership and stewardship but broader questions such as who has a monetary vested interest in the success of the product. Since I’ve returned home, I’ve begun to construct in my head a real, tangible way in which we can start to build a community to do this work, much of which is inspired by Jon Udell and Mike Caulfield’s collaboration on Digital Polarization. If any of this sounds in the smallest bit interesting, please comment or reach out.
Situating My “Open”
When I find myself at OpenEd, I often feel the need to explain myself. As smarter people than myself have mentioned, the word “open” itself is often a moving target. So I want to quickly give some context to how I interpret open. David Wiley wrote in April that whether you are talking about OER, open access, open source, etc., they all involve two things:
- Free access.
- A formal grant of rights and permissions normally reserved by the original creator.
This frames open as an end product. So I’ll go ahead and say I find open as an end product less interesting. Open as a space that can produce open products: that I find much more interesting. I’m equally weird, though, in that I don’t necessarily blindly subscribe to open pedagogy. I’m also less interested in open as a pedagogical strategy than I am in open as a digital environment for situated learning, communities of practice, and identity construction. For me, much like how Lave and Wenger positioned communities of practice, it’s much more a learning theory than a pedagogical practice.
Legitimate peripheral participation is not itself an educational form, much less a pedagogical strategy or a teaching technique. It is an analytical viewpoint on learning, a way of understanding learning.
– Lave and Wenger, Situated Learning: Legitimate Peripheral Participation
At OU we’ve tried to position our Domain of One’s Own project, OU Create, as a space to be inhabited (or in contrast—not inhabited, maybe abandoned). Yes, it fits the definition of free access and of specific permissions. But it doesn’t have to.
I often go into classrooms to lead demonstrations on how to use our domain platform (and I do want to call it a platform and highlight that, because I’ll come back to that point later), and I’ll tell the students what it means to register a domain. That it’s yours. That you own it. That you own the data. And that you can take it with you after this class, after you graduate, or not. Long term, it’s your garden to tend, and you can decide whether you want to.
And often I’ll get a student who wants to contest me on the issue. Do we really own this? Does this mean I can do with it what I want? Can I decide whether it’s public or private? Etc.
And the answer to all of those questions is yes. What’s interesting is that I’ve never had a student ask about ownership of their textbook or ownership of their LMS course.
As an institution, OU Create has lent us the opportunity to talk about what it means to give students their data. How do we define data, and how do we support the notion of taking it with you? What obligation do we have to help them protect that data? What do we mean when we say we respect a student’s privacy? How do we support free speech?
I want to be clear and say that I’m not trying to claim Domain of One’s Own is the best and only solution for having these conversations. In the same way that I believe forcing someone to stand for a pledge to a flag defeats the purpose of a pledge to a flag, I believe requiring someone to own their digital identity defeats the point of ownership, as ownership is a choice. Openness is simply the set of ingredients by which someone is afforded the opportunity to make that choice.
Misinformation and Platforms
But as someone who is often thinking critically about the types of virtual spaces we require our students to enter, I think this moment in time is a better wake-up call than ever to reconsider those spaces—including the open ones. I don’t think anyone was surprised to hear that Facebook and Google have been required to turn over Russian-linked data to the federal government for investigation. It’s been reported recently that YouTube, Tumblr, and even Pokemon Go also turned over data. Shame on you if you didn’t see that one coming.
As both a faculty member and practitioner in journalism, I care deeply about these issues, specifically fake news, and have found Mike Caulfield and his work on digital polarization to be a canary in a coal mine. Mike has recently argued, while citing a Stanford History Education Group study, that the issues involved in disinformation extend well beyond the concept of fake news. The black-and-white framing is that there are hoax sites and there is “real” news. But there’s a large grey area. Intention is much harder to recognize, pull apart, and understand. As Mike said at 10:30am, in quite possibly the quickest citation ever, the problem is that we are all vulnerable to charges of bias.
I’ve been thinking recently about how we might take these techniques for evaluating information or disinformation and apply them to platforms. What metrics should we be using to evaluate OU Create as a platform? Recognizing that not all open is good and not all closed is bad, that it’s much messier than that, how do we make sure we continue to be critical of OU Create, knowing that it’s ultimately still just a platform for data creation and, possibly, dissemination?
As I see the conversation in the OpenEd community starting to concentrate around platforms–specifically OER textbook platforms–I want to ask: to what standards are we holding these platforms accountable? Further, how can students evaluate these tools and the companies’ practices and intentions?
One website I often show my students is Terms of Service; Didn’t Read. This site is a community collaboration that offers easy-to-read explanations of the terms of service for popular sites like Google and YouTube (and even gives each a letter grade!). Here are some of the questions they are trying to answer:
- Do you control the copyright of your content on this platform?
- Can your content be removed at any time without prior notice?
- Do they monetize your data for third parties?
- Is your content permanently deleted if you delete it?
- Do they contribute their developments as open source projects?
- Can the terms be changed at any point without notice?
These are indeed some of the right questions, and they are really helpful. Unfortunately for my own needs, they’ve only gone deep into a few platforms, a lot of their findings are inconclusive, and very few of the platforms covered overlap with edtech.
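If a community wanted to extend this kind of review to the platforms ToS;DR hasn’t covered, the rubric itself is simple to encode. Here’s a minimal sketch in Python, where the criteria wording and the grade cutoffs are my own invention for illustration, not ToS;DR’s actual scoring method:

```python
# A hypothetical ToS;DR-style rubric: each question is phrased so that
# True is the answer that favors the user. Criteria and cutoffs are
# invented for illustration only.

CRITERIA = [
    "user controls copyright of their content",
    "content cannot be removed without prior notice",
    "data is not monetized for third parties",
    "content is permanently deleted on request",
    "developments are contributed as open source",
    "terms cannot change without notice",
]

def grade(answers):
    """Map yes/no answers (True = user-friendly) to a letter grade."""
    score = sum(1 for q in CRITERIA if answers.get(q))
    fraction = score / len(CRITERIA)
    # Made-up thresholds mapping the fraction of good answers to A–E.
    for letter, cutoff in [("A", 0.9), ("B", 0.7), ("C", 0.5), ("D", 0.3)]:
        if fraction >= cutoff:
            return letter
    return "E"

# Example: a platform with user-friendly answers to four of six questions.
example = {q: True for q in CRITERIA[:4]}
print(grade(example))  # 4/6 ≈ 0.67 → "C"
```

The point isn’t the code itself but that a shared, machine-readable rubric would let a community score many platforms consistently and compare results over time.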
In 2012, Audrey Watters developed The Audrey Test, a set of yes-or-no questions for edtech products that goes beyond ToS;DR to include questions more specific to education:
- Do you work closely with instructors and students to develop your product?
- Do you offer data portability to students?
- Do you offer an API?
- Do you meet accessibility standards?
- And, finally, do you have a revenue strategy that involves something other than raising VC money?
I like that last question because it gets us closer to understanding the intent of the company in developing the platform (note: Part 2 of the test is equally valuable). Now I want to tread lightly here, knowing that we have many attendees this year who are either looking to give or receive funding. I don’t mean to say external funding is bad, but I also don’t want to say it’s always good. What I do believe is that it’s really helpful when organizations that receive funding are open and transparent about what they’ve received, who they received it from, what the funders’ intentions are, how the money will be utilized, etc.
I bring up this conversation because, when the revenue model for the web is inherently selling content, advertising, or a mix of both, these questions help inform what happens to students’ data, the topic of this conversation. And as much as I want to speak about DoOO with rhetoric such as student agency and digital identity, all of these ideas hinge on just that–data.
I want to end with a few recommendations:
- As a community, we need a more comprehensive strategy for how we evaluate open and OER platforms. It has to extend beyond access and permissions to include business model, growth model, and intent, though I am still not certain what that comprehensive list looks like. One example is the live annotation of Slack’s privacy statement led by Kristen Eshleman and Bill Fitzgerald.
- We need to continue to be willing to be critical of those within our community, and we need to allow others to be critical of our own work. Caulfield also told us we all have biases. And for when our own biases fail–and they fail–we need to support those beyond our institutions whose critical analysis of our practices is necessary. At this point, there’s really only one person doing that work, and that’s Audrey Watters, and she’s a much-needed voice. So please support her.
- Last, I want to echo some of the comments we heard in David Bollier‘s keynote: the conversation needs to extend beyond end products like open source, open websites, and open textbooks to thinking about what I was referring to as “open as a situated learning space,” or what he very wisely refers to as the commons.
Featured Image: “Platform” by Martin L is licensed under CC BY 2.0