Upcoming LINCS Discussion with experts in education applications of Artificial Intelligence (AI), Virtual Reality (VR) and Augmented Reality (AR)

Hello Integrating Technology and Program Management Colleagues,

During the week of October 14th, the LINCS Integrating Technology and Program Management groups will host a weeklong asynchronous discussion with a cutting-edge panel of experts in education applications of Artificial Intelligence (AI), Virtual Reality (VR), and Augmented Reality (AR). An important professional development feature of LINCS is keeping you current with innovations in the adult basic skills field. K-12 education and higher education have been exploring teaching and learning applications of AI, VR, and AR for several years, although these technologies are relatively new to adult basic skills education practitioners. This discussion is an opportunity for LINCS members, and others who may be interested, to learn about and discuss examples of innovations in AI, VR, and AR that have been designed specifically for adult learners, or that were designed for K-12 students but hold promise for adult basic skills learners. Please mark your calendars for the week of October 14th and plan to join us for this great opportunity to explore AI, VR, and AR.

The expert panel includes:

  • Art Graesser, PhD, Department of Psychology and Institute for Intelligent Systems, University of Memphis. Dr. Graesser is a professor in the Department of Psychology and the Institute for Intelligent Systems at the University of Memphis and an Honorary Research Fellow in the Department of Education at the University of Oxford. Working with the Center for the Study of Adult Literacy at Georgia State University, he and his University of Memphis colleagues have developed and tested AutoTutor, an intelligent tutoring system that holds conversations with adult learners in natural language.
  • Susan Gaer. An emeritus professor of English specializing in ESL at Santa Ana College in Southern California, Susan Gaer has been a subject matter expert and technology integration professional development specialist for OTAN, California’s statewide adult basic skills technology professional development organization. She is a partner of World Education’s Education Technology Center and President-Elect of the California Association of Teachers of English to Speakers of Other Languages (CATESOL). She has focused on using VR with ESL students from beginning to advanced levels.
  • Cliff Archey. As Senior Education Program Manager for IBM Corporate Social Responsibility, Cliff is the Offering Manager for Teacher Advisor With Watson, managing the strategic direction and implementation of this free AI-enhanced planning tool for teachers.
  • Johan E. Uvin, Ph.D., President, Institute for Educational Leadership. Dr. Uvin’s work in adult basic skills education, including ESOL/ESL, includes positions as an ESOL teacher and program administrator in Boston, an associate state director of adult education in Massachusetts, a state director of adult education in Rhode Island, and Assistant Secretary of Education at the U.S. Department of Education. He first engaged in Virtual Reality work when he represented the federal government at a Virtual and Augmented Reality Summit, where he promoted the use of VR for training and development purposes. He subsequently provided oversight to the EdSim Challenge. Most recently, he has been working with Oculus to expand VR applications in the education sector, with a particular focus on creating access to hardware and applications in communities where children, youth, and adults otherwise lack access to these emerging technologies.
  • Robert Murphy, Ph.D., is a Senior Policy Researcher at the RAND Corporation. Before joining RAND, Dr. Murphy was the director of evaluation research for SRI International’s Center for Technology in Learning, where he was the Principal Investigator for the Technologies for Adult Basic Literacies Evaluation (TABLE) study. He was a panelist in the 2016 LINCS discussion, Recent Research on Technology and Adult Basic Skills. His work focuses on research and evaluation of innovative educational and workforce training programs and technologies. He is the author of Artificial Intelligence Applications to Support K–12 Teachers and Teaching: A Review of Promising Applications, Opportunities, and Challenges.

I will be posting more about this discussion in the next few weeks. Please share this announcement with colleagues who may be interested.

David J. Rosen, Moderator

LINCS CoP Integrating Technology and Program Management groups

Comments

While doing research on civic+digital engagement, I came across a VR application for adult learners that I thought might also interest members curious about this upcoming discussion.  Below is a description of the project that inspired this new VR learning space, and a link to an interview with the graduate student responsible for creating it.  

The U.S. has the world’s highest adult incarceration rate, with over 2.2 million people currently behind bars. Research suggests that formerly incarcerated women experience heightened anxiety upon reentry and are not prepared to navigate daily encounters. The Massachusetts Department of Correction partnered with the Engagement Lab at Emerson College in Boston to use Virtual Reality (VR) technologies to expose inmates to simulated reentry scenarios prior to their actual release date.

In a VR setup, sensory information is delivered through a head-mounted display that tracks natural head movements, creating a convincing immersive experience. The Lab partnered with the Massachusetts Department of Correction’s South Middlesex Correctional Center (SMCC) to engage a group of stakeholders prior to the design of the VR curriculum, in order to understand the circumstances and experiences that lead to anxiety during reentry.

Two core research questions anchored this project:

1.  What are the main points of anxiety among recently released inmates that lead to recidivism?

2. How might a VR experience be designed to alleviate those anxieties?

The objective of this project was to design a pre-release curriculum using VR technology. The design process was participatory from beginning to end, inviting inmates and those recently released to provide substantive input into the direction of the project. You can read more about that process in the interview with developer Melissa Teng.

I look forward to learning about more VR applications for adult learners as part of this discussion.

Best,

Mike Cruse

Career Pathways and Disabilities and Equitable Outcomes Moderator

michaelcruse74@gmail.com

Hello Colleagues,

Our discussion on AI, VR, and AR will officially begin Monday, October 14th. Because Monday is a holiday for some of our panelists and some members of the LINCS Integrating Technology and Program Management groups, some may not join us until Tuesday, October 15th.

Panelists, if you wish, please add to the brief introductions I posted by replying to this comment with more information about yourself and with suggested readings for the discussion.

LINCS members who wish to join this discussion: I hope you will read Dr. Robert Murphy's short RAND Perspective paper, "Artificial Intelligence Applications to Support K–12 Teachers and Teaching: A Review of Promising Applications, Opportunities, and Challenges" (https://bit.ly/2oTX1zj), and then begin to post questions for our panelists about the use of Artificial Intelligence in education. On Tuesday and Wednesday, please also post your questions about Virtual Reality and Augmented Reality.

On Monday, I will also post some questions for our panelists, although some panelists may not join in the discussion until Tuesday.

David J. Rosen, Moderator

LINCS CoP Integrating Technology and Program Management groups

Colleagues, I wanted to share an article from a magazine called Training. Its most recent edition features "AI or Just Sci-Fi" (https://trainingmag.com/trgmag-article/ai-or-just-sci-fi/), which looks at how AI can impact training and workforce development. I'm looking forward to this discussion. Kathy

Thanks, Kathy, for sharing the article "AI or Just Sci-Fi" (https://trainingmag.com/trgmag-article/ai-or-just-sci-fi/). It reminds us of some of the ways that artificial intelligence is growing in business, including in education and training applications.

Our panelists, and others, might want to react to some of these excerpts from the article:

1. A great benefit of AI-assisted training is the facilitation of individualized learning plans, says Elliot Dinkin, president and CEO at Cowden Associates, Inc. “Certainly, one-on-one tutoring is effective for personalized learning, but highly impractical and cost-prohibitive at scale,” he says. “There are solutions that use AI to train and rely upon a process that matches how each individual person learns and then adapts the needs to the experience and skill levels of each learner. This way, the solutions focus only on what people need to learn, and skip what they’ve already mastered. This is beneficial, as it will cut training time and boost knowledge and skill acquisition while also building self-awareness.”

2. AI can help employees themselves, with support from trainers, to find the career paths that are best for them and their organization, says Mike Hendrickson, vice president, Tech and Dev Products for Skillsoft. “Using data to intelligently inform learners where their aptitude and current skill set could be best utilized will help people re-skill, up-skill, or pre-skill new roles and opportunities in their organization,” he says. “I think there is great promise for remedial suggestions and accurate assessment of a learner’s struggles where we can pinpoint learning assets to help bridge the skill gap.”

3. The technology might even help ensure the quality of the learning content being delivered, so each employee is assured of receiving the right program to meet his or her needs. “AI could help make sure objectives are clear, not biased, and are measuring what is supposed to be taught. Kind of a balance and veracity check on the delivery of content to a learner,” says Hendrickson. “But more than that, we could use AI to figure out if the instruction is just to meet objectives, or if the instructor is truly teaching the subject in a way all learners understand.”

4. People need and have a desire for human interaction, and that doesn’t stop with the technology they use daily. Chatbots can grasp the nuances of human-like interactions by learning the user’s actions directly through an active listening interface. This provides a natural and human-like communication that engages the user in deeper, individualized, and personalized conversations. At the same time, chatbots extract data and insights directly from employees that the organization needs to ensure it is addressing the concerns and wants of its workforce.

David J. Rosen, Moderator

LINCS CoP Integrating Technology and Program Management groups

Over the last 25 years I have been developing intelligent tutoring systems (ITS) with conversational agents. These agent-based ITS help students learn by holding one-on-one conversations with them. I am uploading a 2018 handbook chapter that I wrote to give a glimpse of this line of research.

Graesser, A.C., Hu, X., & Sottilare, R. (2018).  Intelligent tutoring systems.  In F. Fischer, C. E. Hmelo-Silver, S. R. Goldman, and P. Reimann (Eds.), International handbook of the learning sciences (pp. 246-255).  New York: Routledge. 

One system that my team developed is AutoTutor for struggling adult readers.  The goal is to teach them comprehension skills (over 30 modules on different strategies).  Take a look at read.autotutor.org and try out AutoTutor.

Other systems we have recently developed are ElectronixTutor and PAL3 (Personal Assistant for Lifelong Learning) for the Navy. I have served as past president of the Artificial Intelligence in Education society (an international society) and have also worked with the Army on the Generalized Intelligent Framework for Tutoring (GIFT, www.gifttutoring.org).

Hello, Art. I just tried one of the sample AutoTutor lessons, the one on Word Parts - Affixes. I liked the animation with voice, as well as the presence of a digital student that made errors. While testing things out, I had the following thoughts:

1. Students often become clock watchers in class when they believe that their proficiency or engagement in the topic has little impact on how quickly things progress. As I was testing how the system reacted to multiple wrong responses, the progress bar appeared to move along at the same pace as if I were getting every question right. About halfway into the lesson, I wondered how frustrated some of my students might get knowing that they are just stuck in a script. Are there examples that adapt to consistent errors/failures, or ones that adapt to consistent success? Do they adapt by simply adding more time (with the same instructional responses) when the learner is struggling, or by truncating the number of practice problems when the learner is succeeding?

2. As the world starts to explore automated learning options, there is some concern about accuracy and how errors are addressed and discussed. In some systems, there are forums where people can pose questions about content and its accuracy, and from those forum discussions the developers get a heads-up on potential fixes. In my experimenting, I found a couple of issues that I imagine might distract some learners. One of the game questions was a fill-in-the-blank: "As the tropical storm is becom______ a hurricane, ..." While "becoming" is listed as a response, it is unfortunately listed twice, each time with a different rationale. One of the responses correctly had "BECOMING - tells the reader that the event is going on now," while the other had "BECOMING - tells the reader how a tropical storm becomes a hurricane." The digital voices did instruct the learner to choose the best response and rationale, but if a learner misses that one-time audio instruction, frustration can set in quickly, especially because there are no written instructions anywhere. When the student starts thinking, "Why are there two becomings?", a review of written instructions might help them realize that the rationale is what differentiates the two entries. Also in the game, there seems to be a bug in the scoring system: if I choose the correct answer and the computer student also chooses the correct answer, I get a point but the computer player does not, even though we both answered correctly. For students who obsess over scores, inconsistencies in how those scores are awarded often become focal points. This all got me thinking about how you and your team build feedback loops for users. Do you find that some feedback options are used more often than others? Are particular types of feedback more helpful for creating a better experience?

3. When I purposely tried to get some answers wrong, I noted that the correction did not help me figure out why my selection was incorrect. For example, in finding the root of "detecting" (I think that was the word), I chose "tect" as the root because I was thinking that "de" is often a prefix, as in "declassify" or "decommission." The correct answer given was "detect" as the root, and it was frustrating that I did not receive guidance or feedback that addressed my thinking about "de" being a prefix. This got me thinking about adaptation as it relates to feedback. We can have all sorts of branching arcs for instruction, but how is AI going to detect and adapt to a learner's reasoning, especially as strings of failure or success accumulate?

Thanks for sharing the link to the resource; it got me thinking about a few needs and questions as we move into a more AI-driven world.

Thanks so much for your thoughts on improving AutoTutor. As with all software, the systems are never perfect and we can always use more feedback. We have had many rounds of testing, feedback from expert panels, elimination or improvement of items after looking at performance profiles, and tests of the 30 lessons in Atlanta and Toronto with struggling adult learners (N = 253). However, improvements can always be made. Let me respond to a few of your specific points.

1. The progress bars were designed to be sensitive to progress through the materials rather than to accuracy. We did that intentionally. Struggling adult readers have low confidence and are prone to give up, so we wanted to make sure they had the sense they were making progress. In a different system (ElectronixTutor), the Navy sailors are very knowledgeable and competitive, so we use a progress bar based on accuracy. As a note, the struggling adult readers have positive impressions of AutoTutor (frustration was rare). In contrast, the Navy sailors do get frustrated periodically. Regarding time on the lesson, the total amount of text and number of items are similar for different ability levels in most of the lessons; the more skilled readers get more difficult material in the second half of the lesson. At the end, there is a message on whether they passed or should practice the lesson again.

2. Regarding feedback to the learners, the 30 lessons have different types of feedback because we wanted to explore what sort of lesson templates work best for the adult reader population. Some lessons have a lot of qualitative feedback, others have less feedback but a game format, others have the human trying to help a peer agent in need, others have texts that vary in difficulty, and so on. We have performed statistical analyses on lessons, texts, and question-answer items to explore which items are high versus low quality (and we continue these analyses). By the way, if both the adult and the peer agent get the correct answer, both should get credit (we will take a look at that).

Thanks for the feedback.  These systems evolve over time.

Hello Colleagues,

The discussion on Artificial Intelligence, Virtual Reality, and Augmented Reality has begun. It has been moved, however, from this announcement (bulletin) format to a discussion at

https://community.lincs.ed.gov/discussion/lincs-discussion-education-applications-artificial-intelligence-ai-virtual-reality-vr-and

Please join the discussion there to read and post your comments and questions. The discussion will take place throughout the week of October 14th, so check back daily.

David J. Rosen, Moderator

LINCS CoP Integrating Technology and Program Management groups

Hi everyone! My name is Susan Gaer. I have used augmented reality (QR codes) and virtual reality with embedded 360-degree photos with my ESL students for a few years. You can read an article I wrote on this at https://edtech.worlded.org/getting-started-with-virtual-reality-in-adult-education/. Students really enjoy lessons that incorporate VR because the picture puts them inside the action.