Archive for the ‘Assessment’ Category

2015 Resolution – Reflect on Conservation

January 3, 2015

Progress on reducing my direct carbon footprint

Following on my conceptualization of a solution for reducing my direct carbon footprint (this analysis), here is the year in review:

Reduction. I think my theme for 2015 needs to be reflection on conservation, and its nuances.

In previous New Year's posts I have tracked our car mileage and was pleased to see our progress reducing miles driven. Alas, the reduction was lost in 2014. The lesson: biking and walking to reduce miles in town is easily overwhelmed by driving out of town. This should be obvious; it takes quite a few avoided short trips in town to equal the mileage of one trip out of town.

                     2012 miles   2013 miles   2014 miles
Krista's car (red)         7927         6313         7370
My car (white)             5241         2336         4472
My pickup (blue)           1059         2078         1576
Prius (silver)                                new 12/4/14
Totals                    14227        10727        13418
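As a quick check on the arithmetic, the table's totals can be verified in a few lines. This is just a sketch; the per-car numbers are the figures published above:

```python
# Per-car miles as published in the table above.
mileage = {
    "red":   {"2012": 7927, "2013": 6313, "2014": 7370},
    "white": {"2012": 5241, "2013": 2336, "2014": 4472},
    "blue":  {"2012": 1059, "2013": 2078, "2014": 1576},
}

# Yearly totals across all three cars.
totals = {year: sum(car[year] for car in mileage.values())
          for year in ("2012", "2013", "2014")}
print(totals)  # {'2012': 14227, '2013': 10727, '2014': 13418}

# How much of the 2013 reduction was lost in 2014:
print(totals["2014"] - totals["2013"])  # 2691 miles regained
```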

My friend Stephen has a longer dataset and can demonstrate real progress reducing his driving, so it is possible.

[Chart: Spaeth carbon wedge, car miles]

In our cars, reduced use requires constant vigilance. In contrast, the area of lawn I mow is being reduced steadily by orchards, gardens and landscaping at the Cookhouse. I haven't used the 15-year-old riding lawn mower/snowblower in 12 months. Since I've proven it's possible to manage what is left without the rider, it needs to go away this spring.

Another notable experiment in reduction was to put a timer on our hot water heater. Now we make hot water for morning showers and again for evening dishes. While the savings from not maintaining hot water is small, we have proven in the past 6 months that we don’t lack for hot water when we want it. This experiment needs more study. For example, can we time the water heater so we use up much of the hot water and only store tepid water (rather than having the water heater reheat the water we just used and then storing that hot water)?

Substitution. Another of the strategies to reduce my direct carbon footprint is to substitute technologies.

The Cookhouse was built with all LED lighting and I thought I was done converting the Barn, but the other day I found one more CFL — a small one in a reading lamp. The house is partly converted, the Kitchen, family room and bathrooms are done.

My efforts at substituting LED lighting for CFLs are producing limited results; my home electric bill is not going down much (if at all), because the refrigerator, freezer, dishwasher and electric dryer are such a large fraction of the use that they overwhelm the savings in the lighting.

The used Prius that Krista will drive in place of the “red” car appears to give her 40+mpg vs the previous 25+mpg in “red car,” so if we can hold the miles driven steady, it should be a decrease in fuel used.
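A back-of-envelope sketch of what that substitution could save, assuming (hypothetically) that the Prius covers the red car's 2014 miles at the quoted mpg figures, and using the common approximation of roughly 8.9 kg of CO2 per gallon of gasoline burned:

```python
# Hypothetical annual savings from the 25 mpg -> 40 mpg substitution.
miles = 7370                      # red car's 2014 miles (from the table above)
old_mpg, new_mpg = 25, 40
gallons_saved = miles / old_mpg - miles / new_mpg   # about 110.6 gallons
co2_saved_kg = gallons_saved * 8.9                  # ~8.9 kg CO2 per gallon
print(round(gallons_saved, 1), "gallons,", round(co2_saved_kg), "kg CO2")
```

Holding the miles steady, the swap alone is on the order of a metric ton of CO2 per year.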

Replacement. The oven in our gas stove died last spring and (sigh) there are no parts to repair a 10-year-old stove. The process of deciding has been slow, but we are headed toward an induction stove, all electric. The decision process was explored in this column. Replacing this appliance will produce a permanent decrease in our direct use of carbon, but a small one compared to the gas water heater. I'm having the electrician get me ready to do the water heater, but I can't afford that change yet.

While the 15-year-old gas lawn mower is still running, I'm considering replacing it with an electric one. Since I'm not sure how that will work when the grass grows fast in the spring, I'll keep the gas one around for another season.

Generation. I have some more data on the impact of the solar air heater in the Cookhouse. My previous report was from a short-duration observation. Now I have a year's worth of data, which appears to show April, May & June readings with less consumption than heating degree days would predict. Since the structure is still unoccupied, the only energy use is for heating. Goals for 2015 are getting hot water preheating going in the Cookhouse and in our house. These data are also encouraging me to develop solar air heating to supplement the barn.
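The comparison I'm describing can be sketched as follows. The monthly numbers here are hypothetical, not my meter readings; the point is the method: estimate a kWh-per-heating-degree-day slope from deep-winter months (when solar gain matters least), then see whether spring months come in under the prediction:

```python
# Hypothetical monthly data: heating degree days (HDD) and metered kWh.
hdd = {"Jan": 1100, "Feb": 900, "Mar": 700, "Apr": 450, "May": 250, "Jun": 100}
kwh = {"Jan": 2200, "Feb": 1800, "Mar": 1400, "Apr": 700, "May": 300, "Jun": 80}

# kWh per HDD, estimated from January and February only.
slope = (kwh["Jan"] + kwh["Feb"]) / (hdd["Jan"] + hdd["Feb"])

for month in ("Apr", "May", "Jun"):
    predicted = slope * hdd[month]
    shortfall = predicted - kwh[month]   # positive = less use than HDD predicts
    print(f"{month}: {kwh[month]} kWh actual vs {predicted:.0f} kWh predicted "
          f"({shortfall:.0f} kWh possibly solar)")
```

With real readings, a consistent spring shortfall would be the signal that the solar air heater is displacing electric heat.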

[Chart: Electric heating in the Cookhouse for 2014]

Badge System Design

July 21, 2011

POST-it Notes from P2PU Badges Mtg II (July 18-19)

In the agenda building process, Post-it notes were grouped by the participants into clusters and the clusters given titles. Two of those clusters are reproduced here, as they may shed light on design questions or framework ideas for an upcoming white paper funded in part by the Hewlett Foundation via P2PU.
Nils Peterson's editorial comments, made while posting these notes, are shown in [ ]
  • How to learn from mistakes of educational games. Build on a theoretical framework instead of trying things in a hit-or-miss fashion
  • What are the types of “power ups” that getting badges can unlock? e.g., teach a topic
  • Define the informal learning space where badges can play a role for identity, process, participation, achievement, etc.
  • Foreground the educational outcomes over the technical whizbangs.
  • How can others learn from your badges?
  • What can badges tell us about who we want to be (model identities)?
  • How do I see patterns in other people’s careers [badge collections]? How do I learn from that?
  • Is the assessment [criteria] public?
  • Is displaying evidence for obtaining a badge optional? [seeing the evidence could be useful to other learners]
  • [Badges should be] Pedagogically agnostic: but can there be values? [Possible values might be:] Language and culture, building the tools, and building the community.
  • Are there different badge considerations for different ages? How can one system support life-long learning?
  • Are we scoping badges just in the learning and EDU context?
This heading was also the topic of a breakout session. The original post-its were augmented with new ones and organized into a structure
Guiding Questions:
  • What makes a good badge system?
  • How do you know if you have a good badge system?
Responses were classified into 4 groups. Group #1 was given higher weighting
Group #1
  • Learning objectives are being met
  • How are assessment criteria made public?
  • Do peers learn from doing assessments?
  • Does system record what learner is =NOT= good at doing?
  • What to do with “failed” applications for badges
Group #2
  • Is the system used for long periods in [the learner’s] life?
  • Does user advertise their badges in Facebook, etc?
  • Do learners participate voluntarily?
Group #3
  • Does the system have a user community?
  • Does it have learners using it?
  • Does it have robust assessors?
  • Is awarding of badges automatic or does it require human judgement?
  • Why will peers assess each other well? [assumes system facilitates peer assessment]
  • Are badges better to mark a learning process completed or an assessment passed?
  • [Does the Community reflect on the utility of the assessments?]
Group #4
  • Has robust assessment instruments/ criteria
  • How to assess the system without disturbing it ==Portfolio==

Assessment Specialist Job Application

May 29, 2011

I am applying for the Assessment Specialist position at Peer to Peer University (P2PU) because it looks like the next logical place for me to continue hacking education. Logical because I've been exploring peer learning facilitated by the Internet for 15 years. With the addition of assessment tools, P2PU appears well poised as a ‘disruptive innovation’ that can credential learning that is already happening.

I believe I fit their profile for the position:

I have been hacking education since 1984 (See CV (pdf), Papers 2-19 ). Initially I developed software simulations for teaching, and in 1986 co-founded the BioQUEST consortium with a manifesto about how real learning happens in the sciences. I think that same approach applies to learning about programming — one needs to start with a real problem, solve it, and then communicate the utility of the solution.

In the 90’s I explored the web and how it might be used for collaboration. With collaborators I explored an online science fair (CV, Online Educational Events section) and the creation and administration of a virtual school to connect pre-service K-12 teachers with children in classrooms (CV, Abstracts 15-20 & 22-23 & 25). In conjunction with my teaching assignment at the time I also explored automated assessment of student writing (CV, Abstracts 21 and 24).

In the first decade of this century my focus shifted from K-12 to higher education. As Assistant Director of Washington State University's Center for Teaching Learning and Technology (CTLT), I led explorations of the online learning management system (LMS) the University was creating and collaborated on pilots to move away from the monolithic LMS toward personal learning environments (CV, Articles 26-28 and Abstracts 26, 27 and blog posts). Toward the end of my time at WSU our focus shifted to assessment and methods for gathering authentic assessment from stakeholders (CV, Article 29). Much of our latter work on these topics was blogged here and here rather than published in traditional media. We explored creating portfolios using Microsoft SharePoint, which expanded my 1988 BioQUEST thinking about solving problems to include the social learning aspects of working in public. By mid-2008 we were exploring mashing up assessment tools in SharePoint and other platforms, in a concept we called the Harvesting Gradebook. By the end of the decade the organization's name and mission were refocused on developing a university-wide system of assessment. What we implemented for the University's 2009 Accreditation Report is described here, and our vision for the full concept, from Harvesting Gradebook to University Accreditation, is here.

I bring experience mashing up assessment of both 21st century skills and technical skills into the authentic public contexts where learning is happening. Our work on WSU's system of accreditation required that we help programs capture evidence of student learning acceptable to their stakeholders, including professional accrediting bodies. Professional accreditation standards, for example ABET for Engineering, include a set of “soft” professional skills along with “hard” domain knowledge skills. We developed tools and assessment procedures to help programs document student learning in both these skill sets.

In addition to the work experiences above, further evidence of my interest in hacking education can be found in my commitment to launch a public charter school (Palouse Prairie School). For years CTLT worked with faculty, advocating contextualized learning activities with authentic assessment. Individual faculty would buy in and succeed with the idea. But circumstances always intervened, preventing the innovations from becoming established. Mostly these circumstances were systemic: the University's tenure and promotion criteria, changes in leadership, lack of program-wide adoption of the ideas, and resistance by students to something new. We also tried working directly with students, advocating electronic portfolios as places they could work on learning activities and showcase (for purposes of assessment) both their process and product. But students were trapped in the same context as faculty, a system with a reward structure not aligned with our vision of learning.

Palouse Prairie School was intended from the start to be a systemic effort at an alternative. The school uses the Expeditionary Learning model, derived from the ideas guiding Outward Bound.  The model is exemplified by project-based “Learning Expeditions,” where students engage in interdisciplinary, in-depth study of compelling topics, in groups and in their community, with assessment coming through cumulative products, public presentations, and portfolios. The school has just completed its second year, growing and still developing its implementation of the EL model. I serve on the Board, but without a teaching credential, have no other formal role in the school.

I can bring a working and pragmatic knowledge of assessment practices to P2PU, focused on gathering practical evidence and applying it to direct change. In my mind, small-scale, quick, and sufficiently useful assessment beats ponderous activities that do not deliver timely results to learners or stakeholders, or that are not framed to their needs.

In my last year at WSU we began envisioning the radar graphs created by the Harvesting Gradebook as a sort of badge: both evidence of participation and a way of asserting levels of competence in a multidimensional assessment. We came to understand that if our tools were re-implemented as widgets that learners could embed in pages, communities could develop around those pages. Google search could assist this: in much the same way that searches can be filtered for Creative Commons licenses, we imagined them being filtered for pages with Harvesting Gradebook badges, and even for badges demonstrating a certain level of competency. That work came to an end with the University's reorganization of the unit and the departure of key personnel.

I have some knowledge of game mechanics. Since the 90’s I’ve dabbled in learning analytics and have some sense of the utility and limitations of primary trait methods vs. more holistic approaches.

I have worked as a programmer since the 90's, but for the last 10 years I have been the technical manager for teams that maintained and developed large-scale web-based applications, including the online survey tool that WSU used to implement the Harvesting Gradebook. I have sufficient background in coding and web development to establish credibility with a development team, collaborate to create functional designs, and lead implementation of assessment in the P2PU platform. In fact, I believe that I could facilitate pieces of this work being done in SoW courses by teams of students.

I have strong independent project management skills, working on grants and other projects since the middle 80’s.

I am able to travel for conferences, meet-ups and presentations, but will need to renew my passport, which has lapsed.


Live Assessment

September 8, 2010

This is the page of feedback from my P3 presentation. Use this form ( ) to give more feedback.  The PowerPoint from the session is here on SlideShare.

Radar Chart of Audience Assessment on Learner’s Criteria
Tag Cloud of Other Important Perspectives to consider assessing this work

Additional Comments
I have not yet digested the data from the session. Soon hopefully.

Notes on “tag clouding” Twitter

August 13, 2010

I’m working on my HASTAC/P3 presentation. I want a back channel where the audience can provide feedback/ assessment of the session. The idea is to see if the audience can give feedback with a combination of a controlled vocabulary and free tagging. (As opposed to using a big rubric.)

I looked at a couple of Twitter-centric tools, with the thought that the audience can readily come prepared to Tweet from a range of mobile devices. What is needed is a cloud of the tweets @UserID, embedded in their web page, and some coaching for the audience to tweet with tags. One tool, searching on @nilspeterson, says there isn't a cloud. Another will get the hashtags from a user ID; UserID nilspeterson worked, but this is getting the content that the user tweets, not what is tweeted @UserID.

So to get around the above problem, you need the RSS of the tweets @UserID, and that is protected by the Twitter user's password. Yahoo Pipes can retrieve the @UserID content by passing in the required authentication: you embed username:password in the URL used in Pipes (not totally secure, but workable). Pipes will do a reasonable job filtering tweets; for example, I can get them for a date range. Here is the pipe I've created for user nilspeterson. Wordle will take the RSS from Yahoo and make a handsome display (below). The @UserID comes through big (duh!), but this might not be a problem, since it documents who is getting the feedback. The StopList is hardwired and can't have any additional words added, so blocking the @UserID would need to happen in Pipes. Wordle requires using it on their page (a setup issue and no embed); they say “You may not copy or redistribute the Wordle applet itself under any circumstances.” Refreshing the page is a pain and not practical. I need another tool that can embed.
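The core of what I want from any of these tools is small: count the words in the @UserID tweets, minus a stop list that includes the @UserID itself. A minimal Python sketch (the tweets and stop list here are made up for illustration):

```python
from collections import Counter
import re

# Hypothetical audience-feedback tweets @UserID.
tweets = [
    "@nilspeterson clear framing, want more on rubrics",
    "@nilspeterson rubrics felt heavy; liked the tagging idea",
    "@nilspeterson more examples please",
]

# Customizable stop list; blocking the @UserID happens here, not in the renderer.
stop = {"@nilspeterson", "the", "on", "a", "more", "want", "felt"}

# Tokenize (keeping @handles as single tokens), drop stop words, count.
words = (w for t in tweets for w in re.findall(r"[@\w]+", t.lower()))
cloud = Counter(w for w in words if w not in stop)
print(cloud.most_common(3))
```

The counts are what a tag cloud renders; the stop list is exactly the filtering I'm otherwise trying to coax out of Pipes or the cloud tool.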

IBM ManyEyes won't work because you need to upload a static dataset to them. Another tool can create an embeddable Flash cloud from the feed. It makes a pretty handsome cloud, and you can link from words in the cloud to web pages, but it processes the tags in the URL once, so the resulting cloud is static (no auto updates unless you do it on their site).

Diverse Group Tag Cloud (DGTC) is a WordPress plug-in. It's not certified for the WordPress 3.x used by this host. A first attempt with it does not seem to work.

TagCrowd will take the RSS output from Yahoo. It has a customizable stop list, which will be needed to prune the junk from Yahoo (if I can't get Yahoo Pipes to do the pruning). It takes a while for a personal stop list to show up in the pick list on the site. The image below is unfiltered by a stop list, to show the problems. There is an embed HTML option, which would allow getting the cloud off their page; I assume it updates when the page loads. This is fairly promising.

Google Docs spreadsheet. In the top cell put the function =ImportFeed(“”). Then I need to use Google's word cloud gadget to render the cloud, publish the gadget, and display it on a web page (see below). I need this to refresh on a regular basis.

Alternative (non-Twitter) Method
An alternative would be to skip Twitter and use a Google Docs form. This avoids the need for Yahoo and for stop lists. It would still work with many mobile devices.

What's up with Google Docs?
Google is moving to a new version of Spreadsheet. The new version does not support Gadgets (even Google's own). The old version does, but it's flaky. For today, the focus needs to be on the non-Google solutions.

Google Workaround
So, what about using Google Forms to fill a spreadsheet, publish it, take Yahoo Pipes to pick it up and feed it to TagCrowd? That seems like a reasonable next experiment.

A Waterloo for Publishing or for the University?

June 25, 2010

Cathy Davidson raised a series of issues in her reaction to a lawsuit known as Cambridge University Press, et al. v. Patton et al.

“My larger point?  We are in a confusing and damned-if-you-do-damned-if-you-don’t moment for publishing.  Scholarly publishing loses money.  Scholars who do not publish (at present) lose careers.  How do we balance these complex and intertwined issues in a sane way?  That is our question.”

Jim Groom has some thoughts on one aspect of this question — the issue of credit, or reputation, generated by journal publication:

“And, often times, but not always, that class [of author] is accompanied by three letters after their name and a long list of publications in similar journals which often, but not always, gives them entrè into the journal in the first place. Is this necessarily bad? No. Does it help certain ideas circulate to a particular audience? Yes. Are we putting too much power in the hands of these journals by reacting this way to the idea of credit? Absolutely.”

And as a result of highly valuing publishing in journals, we have created a system that is producing an avalanche of low-quality research.

Cathy's question makes me think of the work of physicist A. Garrett Lisi, who is working outside the traditional academic system and whose practice gave me insight into other ways of thinking about credit/reputation and also about gathering feedback for learning from a community:

“Lisi is developing social and intellectual capital by his strategy of working in public, and has posted a “pre-print” of some of his work in the highly visible High Energy Physics – Theory section of arXiv entitled ‘An Exceptionally Simple Theory of Everything.’

“The Wikipedia entry on Lisi’s paper gives a picture of how the work has generated social capital and become a focus of theoretical debate. The paper has been accumulating peer reviews (in the form of blog posts) and a number of citations including in refereed Physics journals as well as comments on the social news website”

So, I think Cathy is pointing us to a multi-faceted conversation about moving beyond the University (see John Seely Brown, Charles Leadbeater, or Clay Shirky), each of whom is exploring forces that I think will probably address Cathy's “damned-if-you-do-damned-if-you-don't moment” by rendering traditional publishers in academe irrelevant.

In her post Cathy says

“Shouldn’t we be teaching the genre [scholarly monograph] to our undergraduates (because we believe it is intrinsically worthy enough to determine someone’s career in the academy) as an estimable form? … [If we] require at least one scholarly monograph in every English class, … we show respect to the genre we say that we live by and we give back something to the publishers who, right now, are expected to publish our work but who experience abysmal sales of it.”

Here, I think Cathy's comment brings academic publishing into the national conversation about university accountability to stakeholders (the students and those investing in them). Molly Corbett Broad wrote in the Chronicle about the political landscape for accreditation and accountability: “The administration has already indicated a willingness to take action when it believes that higher-education institutions are not adequately serving students' interests.” (Alas, it is “premium content” that you may not be able to access.) I think Broad and Shirky are talking about forces that may render more than just traditional academic publishing irrelevant.

It strikes me that the scholarly monograph, as a discipline for the mind, could be useful, but it might not be a form “worth studying in every English class.” It might be more useful for students to be developing skills in peer-to-peer pedagogies, based in forms like blogs and wikis, that operate in a context of information abundance rather than to be studying a form based on information scarcity and expensive publication; a form that will not be used by most students in their future careers.

Why do I focus on credit/reputation and legitimate peripheral participation rather than the academic monograph in a conversation about accountability for learning outcomes? Because, I think discovering conversations, contributing and getting feedback are important aspects of peer-to-peer learning beyond the university. Good feedback is a tool for growth, both for the author and for the community of lurkers (see John Seely Brown on legitimate peripheral participation.)

As to Cambridge University Press, et al. v. Patton et al., I think it will be a passing blip, swept away by much larger forces transforming learning.

PS. And thinking about feedback and peer-to-peer learning is why I'm posting this in my blog ( ) and then cross-posting it as a comment in Cathy's blog at HASTAC. HASTAC's blogs do not appear to support Trackback, so my blog post can't register as a comment on Cathy's, and consequently I need to post a comment in hers. That means I need to create a HASTAC identity (see these objections to creating accounts everywhere). Further, a HASTAC comment does not track back to the people I cite, making it even harder for them to discover and join the conversation.

I missed the bus – thoughts on indirect assessment

March 9, 2010

I missed the bus to work today. I knew time was tight as I was going out the door. As I went along, I gained confidence I would make it, because I saw one of the school buses that I usually meet. And some kids waiting for another school bus (I usually see two school buses). And I saw the city buses at the bus transfer point (but I could not see my bus meeting them).

ALAS, none of that indirect evidence measured where my bus was on its route. As I rounded the Kibbie Dome I saw my bus picking up a rider at my stop and heading away.

I’m noting this as part of the conversation I’m part of about direct and indirect evidence of student learning outcomes. This is an example of the failure of relying on indirect evidence.

Different conversations about what is important

July 16, 2009

In IJ-SoTL – A Method for Collaboratively Developing and Validating a Rubric Allen and Knight discuss their experience validating a rubric with two groups: faculty and industry professionals. They report:

Faculty weights differed markedly from the professionals’ results [see table 4 in the article]. Faculty considered category 2 (In the headline/lead combination, Is the message clear and compelling?) and category 4 (Does the news release use a convincing journalistic style?) the most important. Categories 1 and 5 received the lowest weighting.

We have seen similar results in a course where we asked a group of faculty and a group of industry professionals to rate student work and also to rate the assignment used to assess the student work.

In both examples, the faculty seem more focused on formalisms and the professionals on the aspects of the task that lead to practical success.

Common Reading & Open Learning Communities

May 28, 2009

Thanks, Bill Marler, for your offer to support Washington State University's Common Reading program after it got caught in a recent controversy regarding the book Omnivore's Dilemma. See also the developing Facebook action related to the topic.

From his blog, I can tell Marler has some appreciation of Web 2.0 as a life-long collaboration and learning strategy. This whole event is an example of how having a curriculum open to community review can improve learning outcomes. Searching in Google for “WSU Common Reading” shows that the event lit up a problem-solving community with multiple perspectives but overlapping interests in this topic; a community that produced the resources to sustain a learning opportunity.

WSU’s Center for Teaching Learning and Technology has been exploring how to help students learn in, from, and with such communities with projects like the Microsoft co-funded ePortfolio Contest. A variety of lessons can be learned from that project, including thoughts on how to transform the traditional gradebook by extending the idea of grading out into the community and making it a process for collecting community feedback on student work, AND the assignments that created the work, AND the program goals that shaped the assignments. I think this represents the way WSU needs to move forward with a Global Campus concept.

A lesson in driving up readership

May 5, 2009

On Friday, April 24 the Chronicle’s Wired Campus ran an item on the failure of U. of Michigan’s Online Teaching-Evaluation System. The article was hot news because of the scale of the player and the scale of the failure. I posted this comment near midnight Sunday, April 26:

My comment on the article

This drove a large spike in readership of the associated resources on April 27.

Page views for WSUCTLT blog

And examining how readers got to the site we see they came from several related pages in the Wired Campus article.

pages that referred to WSUCTLT

which brought readers to these pages

pages viewed as a result of the comment