State of the Microgroup Address
Submitted by Edward Latham on April 23, 2016 - 8:49am
Good day everyone. We are just a little past the midway point of the time we have to work together on this project, so I thought this might be a good time to review what we have and have not accomplished so far.
Our group has:
- Established a list of categories and subcategories the field may use to quickly find relevant resources or tools
- Collected a small sampling of resources and tools that fit within our categories
- Collaborated on an evaluation form that people can use to evaluate online resources or tools
- Created a digital reporting tool that will aggregate data from the evaluations
- Created a website that can house our finished evaluations (this is new as of this posting ...see below)
Items we still need to do:
- Continue securing at least two evaluators for each of the resources we have identified to this point
- Continue to add resources to our list as we see the need
- Establish draft reports for each resource that has had two peer evaluations completed, and allow the group to edit these drafts
- Publish completed reports for each resource in the online website
- Assemble all completed data into a final report and have the group peer edit
- Celebrate a job well done :)
As you may note in item 5 of the completed list above, I took some time to set up a webpage to help me take stock of what we have done and what remains. In the process, I found a few things that are worth noting. The website is here for your review. As always, comments and feedback are welcome, but please realize this site is simply a skeleton with placeholders for the data. We will pretty that up in the following months with embellishments and refinements. For now it is a workplace.
One thing I noticed in compiling what we have so far is that we have not consistently used appropriate tags in our Diigo library. This has resulted in some resources being identified as important in Diigo, but the finished evaluation not being processed correctly. As an example, RubiStar is a wonderful tool to help teachers get set up with rubrics, and David not only shared that resource but also took the time to do a review (thanks, David!). In looking through our categories on the website and in our draft documents, I am unsure we really developed a place for tools like this. David accurately tagged RubiStar with "rubrics", "assessment", and "rubric", which all accurately describe the tool, and yet we don't have an appropriate category heading for it. I suggest we may want to add an "Assessment" category under the "Administration for Programs and Instructors" category. Suggestions welcomed, of course. Regardless of what we end up doing with RubiStar, this brought up the thought that we may need to do a quick double check of the tags in Diigo to ensure that at least one tag for any given resource matches one of our categories. Otherwise, when I go to compile data, either I or the computer algorithms will miss some very nice resources.
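For anyone curious what that double check might look like, here is a minimal sketch. It assumes we have our resources and their Diigo tags in hand (the category list and resource tags below are invented examples, not our actual data), and simply flags any resource whose tags include none of our category names:

```python
# Hypothetical sketch: flag resources whose Diigo tags include none of our
# category names. The categories and resource tags below are made-up examples.
CATEGORIES = {"assessment", "lesson planning", "professional development"}

resources = {
    "RubiStar": {"rubrics", "assessment", "rubric"},
    "ExampleTool": {"video", "tutorial"},  # no category tag -- would be missed
}

def missing_category(resources, categories):
    """Return resource names with no tag matching any category (case-insensitive)."""
    cats = {c.lower() for c in categories}
    return sorted(name for name, tags in resources.items()
                  if not ({t.lower() for t in tags} & cats))

print(missing_category(resources, CATEGORIES))  # -> ['ExampleTool']
```

Anything this turns up could then get one category tag added in Diigo so the compilation step picks it up.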
Another takeaway from summarizing our progress so far is that we have a total of 5 reviews (thank you Diana, Steve, and David), covering 4 of our resources. A good percentage of our categorized resources have not been evaluated yet. In order to have time to create, edit, and publish finished reports, we will need to collectively pick up the pace on evaluations. For those who have already evaluated, perhaps you can quickly summarize your experience with these tools? What time frame should people expect to put in on an evaluation? What takes the most time? Are there tricks or hints people should be aware of?
To help ensure people know how many evaluations have been done on each resource, I will be creating a summary list that updates automatically and will put it on the webpage I created and linked above. That way, when people feel like doing an evaluation, they can check the website to see which resources already have 2 evaluations. Our goal is to have at least 2 evaluations for any resource we evaluate. We do not want more than 4, as we could better use that energy reaching more resources at this early stage of the process.
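The counting behind that summary list is simple enough to sketch. Assuming each completed evaluation records the resource it covers (the evaluation list below is invented for illustration), tallying per resource and flagging anything under 2 looks roughly like:

```python
from collections import Counter

# Hypothetical sketch of the summary tally: one list entry per completed
# evaluation, naming the resource it covers. These entries are invented.
evaluations = ["RubiStar", "RubiStar", "ToolA", "ToolB", "ToolB", "ToolC"]

counts = Counter(evaluations)  # evaluations completed per resource
needs_second_review = sorted(name for name, n in counts.items() if n < 2)

print(needs_second_review)  # -> ['ToolA', 'ToolC']
```

The live version would just feed the real evaluation records into the same tally and publish the result on the webpage.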
May is just about to start. End-of-year crazy schedules are kicking in, along with nicer weather. If you can find time within life's challenges to do some evaluations in the next month, your efforts will be appreciated in helping us have enough data to report out to the field.
As always, please share your questions, thoughts, and ideas for making this work as positive and productive as we can. Although we may be a bit behind on evaluations, seeing the compiled list of what we have accomplished is uplifting :) Everyone's discussions and contributions have already helped create some wonderful outcomes as we zoom past the halfway point of our time on this project. Thank you all for your continued efforts!