Digital or Print? Does it Matter?

November 5, 2009

Krause makes the important point that the distinction in academic publishing is not between electronic and print. That debate is dead (or at least should be dead). The real distinction is between peer reviewed and not peer reviewed. Whether an article is online or in print doesn’t change the quality of the article or the vetting process involved. Peer review is key. Krause’s discussion of this process helps us move past old debates about electronic journals.

I wonder, though, how we should take his arguments and whether he isn’t being a little hypocritical. He urges scholars to adopt new forms of self-publishing and calls for new approaches to the tenure process that embrace online self-publishing. But he originally published the article in CCC Online, which I assume was peer reviewed. Then, when he republished the updated version of the article, he did so in Kairos, an online, peer-reviewed journal. Why did he do that? Probably because he realized that he needed to do so for tenure. Also, while he talks about online publishing as being dynamic, as he rightly notes, online journals are typically not dynamic. Once an article is published, it remains in its original form even though it would be quite simple to later change the text. I assume the same was true for his article. What do we do with the idea of dynamic academic publications? Certainly, online texts are more dynamic. If I have access to the server, I can easily go in and change a sentence or two in a published article. But doesn’t that negatively affect scholarship, at least the way we currently conceptualize scholarship? Don’t we need a certain static version in order to cite correctly and further the academic process? I think we do.

The bigger question, however, concerns how we should assign value to self-published websites. I’m all for new publishing models, and I think the peer review system has serious problems. But I don’t really think that compiling learning resources should be given the same research weight as publishing a piece of original research in an academic journal. The websites Krause discusses (possibly with the exception of the Aristotle site) all seem like highly valuable, thoroughly researched lit reviews. No doubt, people publish lit reviews in academic journals. But I’m pretty sure that if I go up for tenure some day with a bunch of super awesome lit reviews as the core of my tenure packet, I’m going to get smacked down hard. Helpful resources are awesome, but at an R1, isn’t the point to contribute original research to your field? Do his examples really do that? I don’t think so. Maybe some of you will disagree.

I’m not saying that self-publishing shouldn’t count as scholarship. I’m just saying that it should be counted based on content, and being helpful is not the same as contributing new arguments to a field. The Map of Future Forces of Education, on the other hand, may function differently. I would have to go through it in more detail to decide for sure, but the boxes I have explored do seem to be making original arguments about future forecasts. While the idea of futurology is problematic, the site does do original work that surpasses the lit review status of some of Krause’s examples.

Finally, I’m not quite sure what to do with Hea’s article. I like the idea of the “making of” a job candidate, but her discussion focused too exclusively on being a job candidate in the field of computers and composition for my tastes. That’s not a criticism of the article; she is writing to a computers and composition audience. But I will never be going to interview as a computer compositionist, so I had trouble relating some of her work to my personal goals.


Tech and the classroom

October 28, 2009

The discussion of learning spaces in the EduCause book made me think of the classroom I currently teach in: Tompkins G126. I teach ENG 331: Communication for Engineers, and for some reason, I don’t teach in a computer classroom. Much of what we do would benefit from computers in the classroom, but I’ve had to adjust my lesson plans accordingly. Luckily, almost all my students have laptops, which is not surprising considering that Engineering students at NCSU are required to purchase laptops (or at least that’s what my students tell me). So I ask them to bring laptops to class, and most of them do. Here is where my personal teaching experience ties into the book’s discussion of hospitable learning spaces: the classroom doesn’t have outlets!!! Or more accurately, the classroom has only two outlets the students can actually access. So basically, most of them get to use laptops in my class until their laptops die. That’s ridiculous. We spend all this money on fancy computer classrooms, but we can’t afford to increase the number of outlets in non-computer classrooms? These are the minor issues of spatial planning that can have a major effect on educational experience.

Now that I’m done with my rant about my classroom as an inadequately designed learning space, I’ll move on to the other articles we read. Carrell and Menzel studied how students perceive different presentation formats in a technologically enabled classroom. Their findings aren’t overly surprising: they found that added technology did little to improve outcomes or students’ perceptions. But I don’t think the finding is as important as it sounds. I wouldn’t expect students to be introduced to a new technology and automatically think it’s awesome. That’s asking way too much. We have to get used to things before we can use them correctly as presenters or before we know how to process the new technology as viewers and learners. Take wikis, for example. Wikis are really amazing educational tools if used correctly, but my students don’t want to use them. They are used to a certain type of classroom and a certain type of collaboration, and when I try to change that, they don’t always react positively. Does this mean, as the researchers put it, “there is no ‘value added’ to make the changes worthwhile”? No, I don’t think so. I think we should expect that students will not automatically love new technologies in the classroom. Once they get used to them, that will often change, as it has with PowerPoint.

The two articles on distance learning were particularly interesting to me because I will be teaching two online sections in the Spring. I think online courses present an interesting question that we should all consider: should we try to bring traditional classroom instruction into online spaces? It certainly seems like that is what Carrell and Menzel implicitly suggest. They discuss the use of audio + PowerPoint as a way to move away from traditional lectures while still maintaining a lecture format. I think it’s probably a better idea to abandon the idea of porting what works in a classroom into an online course. It’s not going to happen, and Skype, video chat, podcasts, etc. aren’t going to change that. At least, that’s what I’ve been told by the people I’ve met with about teaching online. They’ve encouraged me to abandon the idea that I can get the same feel of teaching in a classroom while teaching an online course. We will never be able to make online courses as personal and immediate as classroom courses, so we shouldn’t try. We should instead embrace what is good about teaching online: increased flexibility, increased self-efficacy, etc. If we don’t, we will end up with a second-rate educational experience that constantly struggles to be like its older brother.

Finally, LaRose and Whitten’s idea of computer immediacy is kind of interesting. It’s obvious that the interface of an online course will go a long way toward determining the effectiveness of the course. If the appearance is imposing, students will likely not have as positive an experience with the course. But computer immediacy? That might be taking it too far. For one thing, personalized computer messages are stupid and don’t fool anyone. Saying, “Nice try, Jordan. Try again next time!” doesn’t really make me hate my life less when a link doesn’t work. I know it’s a computer and it doesn’t actually care if I live or die, much less if my link works or not. So unless we can design computer interaction that simulates real interaction, computer immediacy is probably not that important. And I really doubt that the computer interaction that finally definitively passes the Turing test will be designed by education professors.

Maybe I have no idea what composition is

October 14, 2009

The discussions of computers and writing in our readings this week seemed a little strange to me. I know they made a lot of sense in the late 1990s and early 2000s, but the entire idea of ‘computers and writing’ as a sub-discipline confuses me. What does it mean? I assume it has to mean something more than simply writing with a computer. At this point, doesn’t almost everyone write with a computer? In one of our earlier readings, an author talked about how he can no longer write with pen or pencil for an extended period of time; his muscles simply aren’t trained for it. I’m right there with him. I have been writing exclusively with computers for over a decade now, and I don’t think I’m an exception. So that brings me back to my question: what exactly is the sub-discipline of computers and composition?

I think my problem understanding exactly what the sub-discipline includes is likely a problem with the name. I assume the sub-discipline has moved past focusing simply on writing with computers and has moved towards an understanding of new computer programs that might help writers. Am I mistaken? Is my problem really just a problem with a somewhat dated name for the sub-discipline?

Ok, second question: What is a composition classroom? Our readings this week talked about using new technologies to change the way students approach writing in a composition classroom, but it hit me about halfway through the readings that I don’t know what a composition classroom is. I know what composition is, I know what a classroom is, but I apparently don’t know what a composition classroom is. It seems to me that the idea of a composition classroom is inherently fractured. At NCSU, we take a writing across the curriculum approach. Students are taught to write appropriately in different fields. But in some of the readings we had this week, the composition classroom seemed more like a creative writing classroom. Maybe I’m not familiar enough with the composition literature, but all semester I haven’t been able to get a handle on what it means to teach composition. I think that inability to find some kind of core to teaching composition is a serious problem.

As an example, let’s take hypertext fiction or other types of hypertextual writing. Hypertext is a useful tool to both deconstruct the forced linearity of most writing and to place the reader in a new subject position of producer of the narrative. To be clear, I’m not blindly knocking hypertextual writing as an educational tool; I’m sure it has uses I’m not considering. But the main goal of hypertextual writing (fiction especially), is to empower the reader to construct the narrative. That’s cool, but in reality, as writers we don’t want to empower the reader. The goal of a good academic or professional writer is to control the reader’s movement through the text. In Science in Action, Latour has some really wonderful passages on how scientists control movement in their articles, and his arguments can be applied to all academic and professional writing. If I teach one of my engineers to write a proposal, I am trying to teach as many techniques as I can that help the engineer stop the reader from constructing his or her own narrative. My engineer wants to write a proposal that controls the reader’s movement and brings the reader to a hopefully positive conclusion. Same goes for academic writing in every discipline.

That might be a stupid example, but I use it as a small example to point to some of the problems I’ve had with some of our readings. For students to write successfully across the disciplines, do they have to deconstruct the writing process? Do they have to enable their readers? I don’t know; maybe one of my classmates will kill this post and let me know.

Emails and such

October 8, 2009

The most interesting part of Stephens et al.’s study was the following research question: how does email informality affect professors’ opinions of students? The authors found that informal emails annoyed professors and lowered their affect for the student. So should we be playing grammar police over students’ emails? I think so, but I think we have to do so within reason. Within reason means we have to acknowledge that there are layers to the question of email formality. Stephens et al. write that “Instructors might view this out-of-class communication similar to assigned written work.” That would be abandoning the “within reason” part of my argument. Emails are not formal written work; they are a less formal medium than letter writing, memos, etc., something we teach our students in ENG 331. It’s ridiculous to act like students should be graded on how they write emails.

But on the other hand, students should not be writing emails to professors with misspellings, grammar errors, and SMS abbreviations. There are many reasons why, but I want to discuss the one I see as the most important: professional development. Ideally, university should be about getting students to think critically as well as preparing them for the professional world. Well, in the professional world, students hopefully aren’t going to be using “RU” in emails to their boss. So much of what we do as communicators is habit. As professors, do we have a responsibility not to condone our students’ bad habits? One thing I’m sure of…if a student spends 4 years writing informal emails to all his/her professors, there is almost NO WAY that student avoids eventually sending a totally inappropriate email to a boss. The communicative act will be too ingrained in the student’s thinking. Communication is not a light switch; we can’t turn it off and on. If I don’t punctuate or capitalize emails for 4 years, I’m not going to do it perfectly in my 5th year. It just doesn’t work that way.

More interestingly, I have a question for my colleagues. Is it ever okay to complain to your class about the emails they send you? This past week I had an assignment due in the class I’m teaching. It was the third assignment that was due for the class, and we had discussed the requirements in detail during class. I also had an assignment sheet posted on Moodle that outlined the requirements in more detail. The day the assignment was due I received 5 emails from students asking me how to do something. I am not kidding here….I could have answered all 5 questions with “Google it.” No joke. If they simply googled “how to _____” it would have been the top hit for all 5 questions. I responded politely and just emailed them the link to the top Google hit, but it really annoyed me.

I’m sure I’m not the only person in this class who has had students contact him/her through email rather than simply googling a term. I have nothing to compare this to because I never taught before email, but do you think the way we use email in educational/professional environments has made students less self-sufficient? I guess I’m trying to get at a question of cost for the student. Before email, students would probably have to go to a professor’s office to ask a question. Now all they have to do is send a three-sentence email. So basically there is very little cost for the student. Having to go to the professor’s office was annoying, so students would have been more likely to figure out an answer themselves. Now all they have to do is hit ‘send’ and not worry about it. Should I say something to my students? Playing off the theme of professional development, do I have a responsibility to say something to my students? I know that if I were a project manager working 60 hours a week, I would consider firing an employee who constantly emails me questions he/she could easily figure out. So would it be appropriate to say something? And to finish with a very broad question, do you think the way students use email negatively affects student self-sufficiency?

The question of difference

October 1, 2009

The articles this week brought up interesting issues having to do with different kinds of difference. We read articles about homosexuals in the classroom, women in the classroom, African Americans in the classroom, and disabled individuals in the classroom. One of the more interesting ideas in the articles was Taylor’s idea of non-negotiable difference. Non-negotiable difference “is meant to emphasize that profound, deep-seated difference is, by definition, non-negotiable.” In other words, there are differences between an upper middle class white male and a poorer African American inner city youth. Neither one can truly walk in the other’s shoes. The borders between these individuals, in Taylor’s words, are “impermeable, even online.”
Interesting. I don’t think many of us would argue against the idea that we cannot truly understand the subject position of other people from different cultural contexts. This kind of radical subjectivity seems to be a staple of much postmodern thought about identity, and it got me thinking about an exchange that occurred during my undergraduate capstone class for my concentration in African American literature. The class was on James Baldwin, and I still remember the following conversation (I can’t remember what I said to start it, but it was something about the text Giovanni’s Room). I start with the response of an African American female in the class.
Her: You can’t understand what Baldwin’s saying. You’re not black; you can’t understand the alienation in his writing.
Me: You’re right, I’m not black and I can’t understand his racial alienation. But I’m reading this as an intensely masculine passage, so maybe you can’t understand his perspective as a male.
It seems kind of silly, but that exchange stuck with me. If this were an imaginary dialogue, I would have a homosexual male in the class jump in and tell my classmate and me that we can’t understand Baldwin because we aren’t homosexual. Then maybe I would have an expatriate jump in and say that none of us can understand Baldwin’s writings because we’ve never fled our home nation looking for belonging.
Maybe the example works, maybe it doesn’t. What I’m trying to get to is the question of identity. How do we define it? Are these differences truly non-negotiable, and if they are, how do we deal with those questions? Radical subjectivity gets at the idea that we can never truly walk in anyone else’s shoes because we don’t share totally common experience. True. So how can we do our best?
Taylor starts getting at my identity question on the last two pages of his article in his discussion of body language (which I didn’t find particularly helpful). In the story he told, the three African American students assumed three different roles in the classroom. Their identity was partially determined by their race, but it was also determined by dialect and computer skill. So what is the identity here? Is the male an African American computer user or a computer user who happens to be African American? The example gets at the often shifting grounds of identity, something we should remember so we can avoid essentializing identity by cramming it into predetermined categories. I think of my ENG 331 class, a class pretty much only engineers take, which means I have an extremely skewed gender balance. No one in the class denigrates the one female in any way, but we haven’t gotten to our group work yet, and I do worry that the group she is a part of (which will be 3 males and her) will force her into a predetermined gender role. But what should I do? If I intervene when it’s unnecessary, don’t I run the risk of determining her identity for her? Maybe she sees herself as an engineer first, female second. If I read a situation incorrectly and jump in, aren’t I telling her she’s wrong? Aren’t I telling her that she will always be a female first, engineer second? I obviously don’t know for sure, and I’ll confront these situations as I gain more classroom experience. I do know that these readings got me thinking about issues of difference, issues we will likely all have to face in our careers.

Where should criticism draw the line?

September 23, 2009

Cynthia Selfe’s article on technological literacy brought up an important issue we should all keep in mind: the issue of access. Access to technology is obviously still a problem ten years later, both in the global digital divide and in our national digital divide. As computers become more and more pervasive, it is now almost impossible to get a high paying job without at least average computer literacy. The consequences are obvious. People from the underclasses who never have the opportunity to use computers are now at an even larger disadvantage than they were before the PC revolution. We need to work on policies, funding structures, etc. that help underprivileged people gain access to computers and information in general. National broadband policies, the open access movement, and other movements toward both increased connectivity and increased access to information are attempts to enact a better future. Of course, none of these efforts solve the most basic access issue: some people simply can’t afford computers. Making the Internet more accessible does nothing to help them. Hopefully programs like One Laptop per Child and the increasing proliferation of cheap netbooks will start to bridge the digital divide, even if only slightly. It’s clear we have a long way to go.

I have a problem with Selfe though. I like to think of myself as a scholar who takes a critical approach to what I study. I like it when other people take critical approaches. However, I recognize that a critical approach is often not particularly appropriate; actually, it’s often totally inappropriate. Selfe writes that teachers in English/language classrooms and first-year comp classrooms “need to recognize that we can no longer simply educate students to become technology users—and consumers—without also helping them learn how to become critical thinkers about technology and the social issues surrounding its use.” Well…that sounds awesome. People should think critically about technology, they should focus on access issues, and they should focus on social implications. But not in first-year comp classrooms and not in literature classrooms!!!! I’m sorry, but telling students they have to produce a paper using Microsoft Word does not mean that you also have to go over access issues and disempowerment. I’m sorry, you just don’t. There should be classes on that, and I would have absolutely no objection to requiring students to take a course that critically examines these issues. That way, students would be aware of the social effects of technologies, but professors wouldn’t have to lace every technology use in the classroom with ethics discussions. To repeat, access and critical reflection are extremely important. But not every English/Comp professor should have to deal with them. If they do, it cuts into time they could spend doing other things. First-year comp classes should teach students how to write academically and think critically about writing academically, not think critically about computers. Classes on ancient rhetoric should teach rhetoric; they shouldn’t teach about uneven distribution of computing. I agree with so much of her article, but I just can’t agree with some of her requirements in the localized settings she identifies.

The idea of critical examination of technology can be extended into a critical examination of existing laws about technology use, as the authors do in the Copyright manifesto reading. I was excited when I saw that we were reading a copyright manifesto written by rhetoricians. Then I realized why most copyright articles aren’t written by rhetoricians; they show a rather paltry understanding of copyright. I’m sure the copyright manifesto would have been interesting if I’d never read about copyright. But I have. So I can think, off the top of my head, of at least 5 writers who do a much better job and look at the issue with more depth and a much stronger understanding. Oh, and can I just say…their understanding of Fair Use is overly simplistic, and I hope no one reads that manifesto and walks away thinking they understand something about copyright. Publishing something for profit is only 1 of 4 parts of a fair use judgment, and often not the dominant one of the four. Oh, and doing something for educational purposes is also only 1 of the 4 pieces. It doesn’t necessarily protect you. Combined with other factors, it will help, but that’s it. So students aren’t automatically protected.

And as a final note, the authors write that “What we are not focusing on in this piece are the debates surrounding the ability of individual authors and artists to make a living producing and selling their work. What we are focusing on is the agency and action writing teachers can take in a world of media monopoly.” Well, if you don’t focus on the main debate, then don’t talk about the issue. It’s nice that they wrote a manifesto, and their intentions are good. But an ideal copyright law would find a way to balance the creator’s ability to profit with others’ ability to access and use the work. If you drop the first half of that balance, then don’t talk about copyright.

Critical approaches to learning technologies

September 14, 2009

Bad futurology: “But I remain skeptical about the import of the change. Apparently eighty percent of home computers are used exclusively for games. I bet many of them will fall into disuse” (Ohmann).
Good futurology: “Modern newspapers, which are already produced electronically, may largely disappear in their paper form” (Anson citing Negroponte).
Predicting the future is a dangerous thing. One may end up looking like a prophet of a new age, but one may also harm an otherwise strong argument with predictions that don’t come true. Ohmann’s article is a good example. He was very prescient in arguing that many of the new computer jobs people were forecasting would be menial labor jobs that require very little computer skill, or very little skill at all for that matter. But he also seems disdainful toward the idea that in the future people will have to achieve at least an average level of computer literacy to be successful. Well, he was wrong. Most people now have to be able to use a computer to perform even basic professional tasks.
So what are the consequences? Does this produce a new, even more strongly enforced power structure? Well, I guess so…maybe. On the other hand, maybe it’s all a little overstated. I don’t want to imply that I am belittling the digital divide in the argument that follows; access is one of the key issues in my open access scholarship research, so I am well aware that there is a huge difference between those with technological access and those without. But it seems to me that focusing solely on the technological aspects of access obfuscates larger structural issues. Did computers introduce the divide between suburban schools and inner city schools? No, of course not. They may have exacerbated that divide, but the divide was already horribly wide because of cultural constraints and extant economic constraints. Computers have made it worse; I’m not arguing otherwise. But some of these arguments imply an overly rosy picture of a world without computers. Inner city schools still generally had worse teachers, fewer supplies, fewer incentives, etc. Those students were still worse writers and less prepared for high paying positions in the work force. It’s important to see the digital divide as an extension of an already existing divide, not a divide caused solely by technological development.
Another point struck me as I was reading Anson’s interesting article. Anson writes that “It is when the prospect of fully interactive, technologically advanced distance learning conflicts with our most principled educational theories that we feel an ideological clash.” Interesting statement. Fairly intuitive; a statement I think 99% of writing professors would agree with. But I find it slightly problematic in relation to other parts of Anson’s article and the other articles we read. All these discussions are about issues of power: corporate power, institutional power, professor power, student power. Well, does it occur to any of us that all the critical research we read on distance education is written by people whose jobs are negatively affected by these technologies? I’m not saying distance learning doesn’t conflict with some established pedagogical theory. It does. But there is a conflict of interest when the people addressing that clash are the very people whose jobs will likely be negatively affected. I saw this recently on the TechRhet listserv I subscribe to. Someone wrote about a study that found distance learning students performed better than traditional students. The initial reaction wasn’t one of reflection. For the most part, listserv members immediately questioned the study, oftentimes without even reading it. There is a conflict here. A conflict we should keep in mind.
Selfe & Selfe’s discussion of interface raised some interesting points about the technologies we use. I am a big fan of examining interfaces as an important part of computer use and I am a big fan of critical approaches. Their work did raise questions about computer interfaces as perpetuating structures of power. However, some of their arguments seemed overstated, and some of their suggestions seemed slightly ridiculous in the face of Anson’s more nuanced approach, which took into account rapidly decreasing institutional budgets. For instance, the issue of language was at the core of Selfe & Selfe’s argument, particularly how many programs come with English as the default language. The point is well made that making English the default does place users in a certain subject position. But looking at it from a Tech Comm perspective, what is the alternative? There HAS TO BE A DEFAULT LANGUAGE. There has to be. We can’t develop programs and not set a default language. The most widely spoken language in the U.S. (for the time being at least) is English. It doesn’t make sense to develop a widely distributed program and not choose English as a default language. Selfe & Selfe’s solution is for academics to play a larger role in interface development, but that would cost a huge amount of money. Money institutions often do not have. We have seen some progress on the language front, and now many programs let you choose the default language at the installation step. But the desktop interface has reached hegemonic status, and the suggestions to develop new interfaces seem to have fallen by the wayside. Is it too late? Now that we have raised an entire generation on the desktop interface, can we introduce new interfaces? Doing so would help fight implicit racism in computer design, but could it be detrimental to students in these already underprivileged positions?
Say you teach students using a garage interface because that is more representative of the socioeconomic roles they are familiar with. Ok, that sounds good. But what about when they try to get jobs that use the corporate interface? Idealism vs. pragmatism. Always difficult to manage.
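To make the default-language point above concrete, here is a minimal sketch of how an installer might pick an interface language: an explicit user choice wins, then the system locale, then English as the unavoidable fallback. Everything here (the `pick_language` function, the `SUPPORTED` set) is invented for illustration, not any real program’s behavior.

```python
import os

# Hypothetical supported interface languages and fallback.
SUPPORTED = {"en", "es", "fr", "zh"}
FALLBACK = "en"  # some default is unavoidable

def pick_language(user_choice=None, env=os.environ):
    """Prefer an explicit user choice, then the system locale, then a fallback."""
    if user_choice in SUPPORTED:
        return user_choice
    # LANG is typically something like "es_MX.UTF-8"; take the language part.
    lang = env.get("LANG", "")[:2].lower()
    if lang in SUPPORTED:
        return lang
    return FALLBACK

print(pick_language("es"))                           # "es" — explicit choice wins
print(pick_language(None, {"LANG": "fr_FR.UTF-8"}))  # "fr" — locale-derived
print(pick_language(None, {}))                       # "en" — fallback
```

The design point is that the install-time language chooser many programs now offer is exactly the first branch: the default still exists, but the user’s choice overrides it.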

New educational techs

September 11, 2009
The inclusion of mobiles in the Horizon Report is a key point. Questions of access are always important, and education that utilizes mobile phones is an important piece of any plan to increase access. Far more people have mobile phones than have computers. Estimates put mobile phone adoption worldwide at over 3.3 billion (I’m not sure if this counts unique users or mobile phone subscriptions). Many people in the developing world will access the Internet for the first time through a mobile phone. Here in the U.S. we have a lower mobile phone adoption rate than other countries (particularly the Nordic countries), but our adoption rate is still high. So we can assume most students will have mobile phones. The bigger issue is that many people will not have the mobile phones discussed in the Horizon Report. Most phones are still not internet-enabled and are not geo-locatable. An important issue going forward is not to conflate mobile phone adoption rates with smartphone adoption rates. Most of the educational uses discussed in the Horizon Report are for newer, more expensive mobile phones. Not everyone can afford a data plan (I know I can’t. Dan, how do you do it?). So I do have a problem with saying mobile phones are widely adopted and then describing educational uses that apply only to a minority of these devices.
I agree with Jacob about Cloud Computing. The Cloud is the future, and it will be a highly useful educational tool (tools?). We already use cloud applications all the time in the form of Google Docs, Gmail, and, I think, wikis and blogs. The most important thing is how Cloud Computing can cut down costs for students at less privileged institutions, just like Jacob says. I’m thinking about the NCSU AFS space, and how it lets students store things on NCSU servers in case they don’t have computers or their own Internet access. Even more impressive is the Virtual Computing Lab at NCSU, which “provides a remote access service that allows you to reserve a computer with a desired set of applications for yourself, and remotely access it over the Internet.” Students can access these virtual computers from elsewhere and then use programs they otherwise wouldn’t be able to afford. Good stuff, and a good first step in helping bridge some of our serious access gaps.
Final technology I’m going to address: geo-location enabled devices. I think there is great potential for teachers to use these devices for educational purposes. I also don’t think they raise the concerns Jacob brings up in his post. First, if schools provide students with the devices, then the students will only use them during school hours, so they won’t be “stalked” during a time when they aren’t supposed to be supervised anyway. Second, even if the school asks students with GPS-enabled devices to use a certain application, the students will know when the application is running and their location is being reported. It would take quite the nefarious school district to install a location-aware app that is always on and always reporting location.
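The second point boils down to a design property you could state in a few lines of code: location is only ever recorded while the app is explicitly running, so the student always knows when tracking is on. This is a hypothetical sketch, not any real school app; all names are invented.

```python
# Hypothetical opt-in location reporter: no silent background tracking.
class LocationReporter:
    def __init__(self):
        self.active = False   # tracking state the student can see
        self.reports = []     # recorded (lat, lon) fixes

    def start(self):
        """Student launches the app; tracking is visibly on."""
        self.active = True

    def stop(self):
        """Closing the app ends all reporting."""
        self.active = False

    def report(self, lat, lon):
        """Record a location fix only while the app is active."""
        if not self.active:
            return False  # ignored: app not running
        self.reports.append((lat, lon))
        return True

app = LocationReporter()
app.report(35.78, -78.68)   # ignored — app not yet launched
app.start()
app.report(35.78, -78.68)   # recorded — student knows the app is on
app.stop()
print(len(app.reports))     # 1: only the in-session fix was recorded
```

An always-on tracker would be the same class with the `if not self.active` guard removed, which is exactly the “nefarious school district” scenario: the safeguard is a deliberate design choice, not an accident.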
One of the more interesting ways I think location-aware apps can be used is as a creative writing tool. Whrrl, a great geo-location service, has users put together stories through locations and pictures of those locations. With the simple tagline “What’s your story?”, Whrrl has created a geo-location enabled storytelling space for interested users. The service can obviously be used in other ways as well, the most obvious being for history classes, parks and recreation classes, architecture classes, and field work in the natural and social sciences. I really think people will come up with much more creative ways to use this ability though, and I’m kind of excited to see what they’ll do in the future.
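The underlying data model for this kind of location-based storytelling is simple: an ordered list of stops, each pairing coordinates with a photo and a bit of narrative. Here is a minimal sketch; the `StoryStop` structure and all field names are invented for illustration and have nothing to do with Whrrl’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class StoryStop:
    """One stop in a location-based story: a place, a photo, a caption."""
    lat: float
    lon: float
    photo: str
    caption: str

# A hypothetical two-stop campus story.
story = [
    StoryStop(35.787, -78.664, "bell_tower.jpg", "Where the tour starts."),
    StoryStop(35.772, -78.674, "lake_raleigh.jpg", "The lake at sunset."),
]

def tell(story):
    """Render the stops in order as a simple numbered narrative."""
    return "\n".join(
        f"{i + 1}. ({s.lat}, {s.lon}) {s.caption}" for i, s in enumerate(story)
    )

print(tell(story))
```

The same ordered-stops structure covers the other uses mentioned above: a history walking tour or a field-work log is just a different set of captions attached to the same kind of list.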

Quality Control

September 8, 2009

I think the most interesting issue brought up in this week’s Born Digital is the issue of quality. The lack of effort people put into finding information never ceases to amaze me. It’s a lack of effort that affects all of us, students and teachers alike. I’m sure almost all of you have had a student turn in a paper that cites totally B.S. sources. I’m not talking particularly about Wikipedia, but Wikipedia is probably the most famous example. What has happened that students now think it’s ok to cite Wikipedia? It’s not just the quality issues; more importantly, students are citing an encyclopedia!! I haven’t cited an encyclopedia since I was 7 years old. These issues of quality raise some interesting questions about this generation of students. I’ll address the questions below.

First, what is the best way to teach information literacy? I’ve read a bunch of articles discussing methods we can use to teach students how to evaluate information they find on the Internet and in print. Some of these articles have made good arguments, but I’ve never read anything about information literacy in the digital age that is significantly different from the critical reading skills I was taught when I was young, which brings me to my next question….

We undoubtedly have access to more info than ever before, and overload definitely happens, but what are the fundamental differences between judging quality before the Internet and after the Internet? The most obvious difference is the loss of the gatekeeping function held so tightly by print for the last 500 years. It was hard to get things printed, so most stuff that was printed was at least slightly respectable. Now almost anyone with an Internet connection can publish material. We’ve lost our information gatekeeper. But I have a problem with that idea. I don’t know about you all, but I was never taught in school that I should simply trust something because someone was willing to print it. We might have had an additional quality judge (publishing house), but I was always taught to be the final critical reading judge. So students now have more information to sift through, true, but ultimately it’s still their job, just like it was our job when we were young, to determine the quality of a source. So do we occasionally overstate the difference and hide behind chic terms like “information overload”?

Final question(s): are the problems we see problems of information literacy, or are they problems of effort? Has the ease with which students can access information made them lazy? I think so. I read about ways students can differentiate between good and bad material, but most of it is fairly intuitive. I think the biggest problem is that they don’t care. Let’s take citing Wikipedia as an example. I love Wikipedia. If I want to know about a topic, I go straight to Wikipedia. If I want to research a topic, I go to Wikipedia and then move on to the material cited in the Wikipedia article. Why don’t students take that last step? Laziness. There’s no reason to cite a Wikipedia article because most articles (at least the ones worth reading for research) have citations the students can read. It takes a pronounced unwillingness to engage with a research topic to ignore all the deeper material that is only a click away. So how do we address this? Better question for all of you: do you think it’s really a case of students not knowing that an article is poor, or is it more a case of them not wanting to go find the better article?