Conference Location: San Antonio, Texas
Publication Date: June 10, 2012
Conference Start Date: June 10, 2012
Conference End Date: June 13, 2012
ISSN: 2153-5965
Division: Computing & Information Technology
Page Count: 9
Page Numbers: 25.177.1 - 25.177.9
DOI: 10.18260/1-2--20937
Permanent URL: https://peer.asee.org/20937
Download Count: 513
Erin Shaw is a Computer Scientist at the Information Sciences Institute at the University of Southern California’s Viterbi School of Engineering. Her research focuses on modeling and assessing student knowledge in the areas of science and mathematics, and experimenting with new technologies for aiding assessment in distance learning. As a Co-principal Investigator on National Science Foundation-sponsored studies, she researches new ways to assess student collaboration in undergraduate engineering courses and new ways to motivate secondary mathematics learning in the context of computer game-making. Shaw was formerly a Software Engineer in the field of computer graphics and taught math and science as a Peace Corps volunteer in Nepal. She has a bachelor's degree in mathematics from Fitchburg State University and a master's degree in computer graphics from Cornell University.
He received a Ph.D. from Hallym University in Korea in 2009. He currently works as a Postdoctoral Researcher at the Information Sciences Institute of the University of Southern California. His interests are educational data mining, intelligent transportation systems, and multi-agent systems.
Jihie Kim is the Principal Investigator of the Intelligent Technologies for Teaching and Learning group at the USC Information Sciences Institute (http://ai.isi.edu/pedtek). She is also a Research Assistant Professor in the Computer Science Department at the University of Southern California (USC). Kim received a Ph.D. from USC and master’s and bachelor’s degrees from Seoul National University. Her current interests include pedagogical discourse analysis, human-computer interaction, social network assistance, and assessment of student collaborative online activities.
An Investigation of Data Displays for Interpreting Participation in Online Discussion: Two Perspectives

Objectives – Purposes

This investigation asks the question, "What types of data display most effectively communicate information about participation in online discussions?" The aim of the study is to assess data displays for continuous formative assessment of online participation. We investigate the question from two perspectives: that of the education researcher and that of the instructor.

Perspectives – Theoretical Framework

Despite the plethora of theoretical frameworks that have been applied to e-learning, there is "little evidence of how these models or theories are applied to effective pedagogically driven e-learning" (Conole et al., 2004).

Methods – Techniques – Modes of Inquiry

This paper describes part of an ongoing investigation into the development and evaluation of new e-learning workflows for instructional assessment. A Workflow Reporting and Feedback System (WRFS) was designed to run instructor-selected workflows automatically and deliver weekly reports via email. Development was motivated by the instructors' high interest in, but low use of, the original workflow portal. The reports are linked to online forms through which teachers can respond to questions about the results and submit their feedback online.

This case study uses a mixed-method approach, including both quantitative statistics and qualitative feedback from interviews with instructors. First, frequency data from an online discussion board were obtained. The data corpus was updated nightly so that all displays were current as well as authentic. Descriptive statistics regarding the frequency of initial posts and responses were combined with information about the message author and the time waited for a response. The data were processed within an e-learning workflow system in which each workflow is based on an assessment question that the instructor would like to answer. Different graphs and tables are generated depending on the questions; the graphs are generated with R. We then conducted grounded-theory-style interviews. Descriptive statistics about message posting frequency in a computer science course were used to produce graphs and tables. The course instructor was shown the results and asked the following three questions about each graph:

1. What does this graph tell you? Is it easy/difficult to interpret?
2. How might you use this information to monitor and/or assess student activities?
3. How can the display and/or results be changed to make them more meaningful?

Questions were rephrased as necessary, and follow-up questions were asked whenever possible to clarify and, especially, to explore ideas beyond the results we presented, in order to understand how we might develop new and better results. The interview design was based on the modern view of grounded theory that emphasizes understanding through interpretation (Charmaz, 2010). Creswell (2007) describes the approach as social constructivism (pp. 20-21), where the research intent is on making sense of the meanings that others have about the world. It is an appropriate paradigm for studying instructors' perspectives on assessment and their interpretation of assessment results because of the strong relationships between the instructors and their unique instructional contexts.

Notes from an initial interview were analyzed and new graphs were proposed. In a second analysis, investigators compared the new graph options and decided on a new suite. This suite (hand-drawn graphs) was taken back to the instructor for a second round of feedback.
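As a rough illustration of the descriptive statistics described above, the following Python sketch derives the wait time between each initial post and its first response, plus absolute posting counts per student, from a discussion-board message export. The file name and column names (thread_id, parent_id, author, timestamp) are assumptions for illustration only; the authors' workflow system computed these statistics on a nightly-updated corpus and generated its graphs in R, so this is not their implementation.

# Illustrative sketch only; hypothetical CSV schema: thread_id, parent_id, author, timestamp.
from datetime import datetime
import csv
import statistics

def load_messages(path: str) -> list[dict]:
    """Read a CSV export of forum messages and parse ISO-format timestamps."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        row["timestamp"] = datetime.fromisoformat(row["timestamp"])
    return rows

def wait_times_hours(messages: list[dict]) -> list[float]:
    """Hours between each initial post and its first response; unanswered posts are skipped."""
    first_post: dict[str, dict] = {}
    first_reply: dict[str, datetime] = {}
    for m in sorted(messages, key=lambda m: m["timestamp"]):
        tid = m["thread_id"]
        if not m.get("parent_id"):              # no parent: this message starts a thread
            first_post.setdefault(tid, m)
        elif tid in first_post and tid not in first_reply:
            first_reply[tid] = m["timestamp"]   # earliest response, since messages are time-sorted
    return [
        (first_reply[tid] - first_post[tid]["timestamp"]).total_seconds() / 3600.0
        for tid in first_reply
    ]

def posts_per_student(messages: list[dict]) -> dict[str, int]:
    """Absolute posting frequency per author (as opposed to averaged rates)."""
    counts: dict[str, int] = {}
    for m in messages:
        counts[m["author"]] = counts.get(m["author"], 0) + 1
    return counts

if __name__ == "__main__":
    msgs = load_messages("forum_export.csv")    # hypothetical nightly data dump
    waits = wait_times_hours(msgs)
    if waits:
        print(f"median wait for first response: {statistics.median(waits):.1f} h")
    for author, n in sorted(posts_per_student(msgs).items(), key=lambda kv: -kv[1]):
        print(f"{author}: {n} posts")

In the reported system, results of this kind were regenerated nightly and summarized in the weekly email reports delivered by the WRFS.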
Data Sources – Evidence – Objects – Materials

In this section, we describe three types of analysis and corresponding data displays, and how the displays evolved from our interpretation perspective to suit an instructor's interpretation needs.

Wait Time Analysis

Forum Participation Analysis

Student Participation Analysis

Scholarly Significance

Conclusions

The study described illustrates the challenge of using graphs, and visualizations in general, to summarize data as part of a reporting system. There may be multiple stakeholders and thus multiple interpretations of the results. It demonstrates how many iterations of feedback and analysis, as well as creative effort in the presentation of results, were necessary to develop graphical results that were meaningful to a different kind of user, an instructor, as opposed to an education researcher. The difference in how the two regarded the time scale and the frequency representation (i.e., averaged versus absolute) was especially notable. Moreover, these results may be meaningful only to specific instructors, given the unique nature of any one course, although we expect that instructors who use similar types of course discussion boards will also find these results useful.

Acknowledgements

References
Shaw, E., & Crowley, M., & Yoo, J., & Xu, H., & Kim, J. (2012, June), An Investigation of Data Displays for Interpreting Participation in Online Discussion: Two Perspectives Paper presented at 2012 ASEE Annual Conference & Exposition, San Antonio, Texas. 10.18260/1-2--20937
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2012 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference.