consolidate all data into a database for comprehensive analysis.

CASE STUDY: GENERATED PILOT EXPERIMENTS AND DATA COLLECTION PLAN

The testbed has been validated by generating three pilot experiments. These pilot experiments are designed using the parameters depicted in Figures 4 and 5 for the N-back and MOT tasks, respectively. Each of these pilot experiments comprises R runs, where each run contains T trials and each trial has S sub-trials. The participant performs the given task once in every sub-trial. For both N-back pilot experiments, we chose R = 3, T = 9, and S = 20. Furthermore, the value of N is also varied randomly between 1 and 3 (i.e., N = 1 for low workload, N = 2 for medium workload, and N = 3 for high workload) across trials to
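As a minimal sketch (not the testbed's actual code), the run/trial/sub-trial structure and the random assignment of N described above could be generated as follows; the function name, dictionary fields, and seed are illustrative assumptions:

```python
# Hypothetical sketch of the pilot-experiment structure: R runs, each with
# T trials, each trial with S sub-trials, and the N-back level drawn at
# random from {1, 2, 3} once per trial.
import random

R, T, S = 3, 9, 20          # runs, trials per run, sub-trials per trial

def generate_nback_schedule(seed=None):
    rng = random.Random(seed)
    schedule = []
    for run in range(R):
        for trial in range(T):
            n_level = rng.choice([1, 2, 3])   # low / medium / high workload
            for sub_trial in range(S):
                schedule.append({"run": run, "trial": trial,
                                 "sub_trial": sub_trial, "N": n_level})
    return schedule

schedule = generate_nback_schedule(seed=42)
print(len(schedule))  # 3 * 9 * 20 = 540 sub-trials in total
```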
(Figure 27) represents a male passenger who survived despite gender being a negative factor in the model's prediction. His high Pclass and fare ($35.5) played a crucial role in increasing his survival probability. The cumulative SHAP value graphs (Figs. 28, 29) further highlight these trends by selecting the few vital causes from the trivial many, showing that instance 3's survival was driven by gender, wealth, and class, aligning with historical data where 75% of women survived compared to only 19% of men. On the other hand, instance 55 survived solely due to his high social class, with gender contributing the least to his survival.

3 Evaluation and Findings

In this section, we evaluate the effectiveness of the DARE-AI labs through surveys conducted
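For readers unfamiliar with how such per-instance attributions are produced, the following is an illustrative sketch (not the DARE-AI lab's code) of computing SHAP values for a Titanic-style survival model; the model choice, feature encoding, and toy data are assumptions for demonstration only:

```python
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Toy Titanic-style data (illustrative values only).
X = pd.DataFrame({
    "Pclass": [1, 3, 2, 3, 1, 2],
    "Sex":    [0, 1, 1, 0, 0, 1],   # 0 = female, 1 = male (assumed encoding)
    "Fare":   [71.3, 7.9, 13.0, 8.1, 35.5, 26.0],
})
y = [1, 0, 1, 0, 1, 0]

model = GradientBoostingClassifier(n_estimators=50).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # one row of feature contributions per instance

# shap_values[i] shows how each feature pushed instance i's predicted
# survival up or down; summing the contributions (plus the base value)
# recovers the model's raw output for that instance.
print(shap_values[4])   # contributions for the $35.5-fare passenger above
```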
flexibility engine within the LMS and adjustments to the underlying framework to facilitate adaptability in the dynamic assignment of materials, tasks, and evaluations, utilizing a more extensive cluster model encompassing a broader spectrum of student characteristics.

References

[1] S. Park, "Analysis of Time-on-Task, Behavior Experiences, and Performance in Two Online Courses With Different Authentic Learning Tasks," The International Review of Research in Open and Distributed Learning, 2017, doi: 10.19173/irrodl.v18i2.2433.
[2] Z. Zen, Reflianto, Syamsuar, and F. Ariani, "Academic Achievement: The Effect of Project-Based Online Learning Method and Student Engagement," Heliyon, 2022, doi: 10.1016/j.heliyon.2022.e11509.
[3] S. B
will enable students to visually explore and interact with muscle segmentation processes, including keypoint selection, boundary tracking, and 3D reconstruction. This hands-on approach aims to foster a deeper, more intuitive understanding of the algorithm's functionality and its practical application in real-world medical imaging scenarios.

Acknowledgment

This project was funded in part by the Northeastern TIER 1 seed grant.

References

[1] J. Zhu, B. Bolsterlee, B. V. Chow, C. Cai, R. D. Herbert, Y. Song, and E. Meijering, "Deep learning methods for automatic segmentation of lower leg muscles and bones from MRI scans of children with and without cerebral palsy," NMR in Biomedicine, vol. 34, no. 12, p. e4609, 2021.
[2] R. Ni, C. H. Meyer, S. S
cybersecurity and digital forensics). Further iterations of the chatbot will focus on improving its ability to facilitate collaborative learning, assist with project-based assessments, and provide actionable feedback to students and instructors.

References

[1] Maderer, J. "Artificial Intelligence Course Creates AI Teaching Assistant," https://news.gatech.edu/news/2016/05/09/artificial-intelligence-course-creates-ai-teaching-assistant, May 2016, accessed January 2025.
[2] Chopra, S., Gianforte, R., and Sholar, J. "Meet Percy: The CS 221 Teaching Assistant Chatbot," ACM Transactions on Graphics, Vol. 1 (1), December 2016.
[3] Lluna, A. P. "Creation and Development of an AI Teaching Assistant," Master's Thesis, Universitat Politecnica de Catalunya, 2017/2018.
[4
described in Section 3.1. The extra credit points from SEP-CyLE were computed as a percentage ratio relative to the student(s) with the highest number of virtual points. The student(s) with the most virtual points received 3% extra credit course points.

The grades for each course project deliverable consist of four components: presentation (21%), demonstration (12.6%), documentation (50.4%), and peer evaluation (16%). The peer evaluation consists of the members of a team grading each other using a peer evaluation rubric provided by the instructor. The rubric includes four criteria: Helping - assistance provided by a team member to other team members; Participating - contribution and attendance by a team member at team meetings; Questioning - the level at which the
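A minimal sketch of the extra-credit rule described above, assuming the bonus scales linearly with a student's virtual points relative to the class maximum (the function name and student names are illustrative, not from the paper):

```python
# Each student's bonus is their share of the class-maximum virtual points,
# scaled to the 3% cap; the top student(s) receive the full 3% of course points.
def extra_credit(points: dict[str, int], max_bonus: float = 3.0) -> dict[str, float]:
    top = max(points.values())
    return {student: max_bonus * (p / top) for student, p in points.items()}

print(extra_credit({"alice": 480, "bob": 600, "carol": 300}))
# {'alice': 2.4, 'bob': 3.0, 'carol': 1.5}
```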
without the dominance of societal biases.

References

[1] T. Camp, W. R. Adrion, B. Bizot, S. Davidson, M. Hall, S. Hambrusch, E. Walker, and S. Zweben, "Generation CS: The growth of computer science," ACM Inroads, vol. 8, no. 2, pp. 44–50, May 2017. [Online]. Available: https://doi.org/10.1145/3084362. [Accessed Jan. 14, 2025].
[2] T. G. Zimmerman, D. Johnson, C. Wambsgans, and A. Fuentes, "Why Latino high school students select computer science as a major," ACM Trans. Comput. Educ., vol. 11, no. 2, pp. 1–17, Jul. 2011. [Online]. Available: https://doi.org/10.1145/1993069.1993074. [Accessed Jan. 14, 2025].
[3] S. R. Roy, "Educating Chinese, Japanese, and Korean international students: Recommendations to American professors
flexible choice for applications like cookie classification and wildcard matching in cybersecurity.

3.3.3 Flan-T5

Flan-T5 is an enhanced version of the T5 model that incorporates instruction fine-tuning [15]. By training on a mixture of tasks phrased as instructions, Flan-T5 improves its ability to follow task descriptions and generalize to new tasks. This makes Flan-T5 particularly effective in zero-shot and few-shot learning scenarios, where the model needs to perform well on tasks it has not explicitly been trained on. In the context of identifying wildcard matches in cookies, Flan-T5's improved understanding of instructions can lead to more accurate and reliable classification results.

4 Results

4.1 Experimental Setup

The experimental
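As a hedged illustration of the zero-shot use of Flan-T5 described in Section 3.3.3 (not the paper's exact pipeline), a cookie/wildcard query can be posed to the model as a plain-language instruction; the prompt wording, cookie name, and model size below are assumptions:

```python
# Zero-shot wildcard-match query against Flan-T5 via the Hugging Face
# transformers library; the model answers the instruction directly,
# without task-specific fine-tuning.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

prompt = ("Does the cookie name '_ga_ABC123' match the wildcard pattern "
          "'_ga_*'? Answer yes or no.")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```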
in education and opens the door to new opportunities for personalization and adaptability in virtual environments. Integrating advanced technologies with robust pedagogical approaches is essential to transform teaching and learning in the digital age.

References

[1] S. Martín, E. López-Martín, A. Moreno-Pulido, R. Meier, and M. Castro, "A Comparative Analysis of Worldwide Trends in the Use of Information and Communications Technology in Engineering Education," IEEE Access, 2019, doi: 10.1109/access.2019.2935019.
[2] O. Kuzu, "Digital Transformation in Higher Education: A Case Study on Strategic Plans," Vysshee Obrazovanie v Rossii = Higher Education in Russia, 2020, doi: 10.31992/0869-3617-2019-29-3-9-23.
[3] B. R. Aditya
education, ultimately preparing students for a rapidly evolving technological landscape.

References

[1] M. R. Chavez, T. S. Butler, P. Rekawek, H. Heo, and W. L. Kinzler, "Chat Generative Pre-trained Transformer: why we should embrace this technology," American Journal of Obstetrics and Gynecology, vol. 228, no. 6, pp. 706-711, 2023.
[2] G. Debjania and J.-B. Souppeza R. G., "Generative AI In Engineering Education," in UK and Ireland Engineering Education Research Network Annual Symposium, Belfast, 2024.
[3] A. Johri, A. S. Katz, J. Qadir, and A. Hingle, "Generative artificial intelligence and engineering education," Journal of Engineering Education, vol. 112, no. 3, pp. 572–577, 2023.
[4] D. De Silva, O. Kaynak, M. El-Ayoubi, N. Mills, D
and curriculum developers select the correct learning goals and activities for their specific student population.

References

[1] S. Isaac Flores-Alonso, N. V. M. Diaz, J. Kapphahn, et al., "Introduction to AI in undergraduate engineering education," in 2023 IEEE Frontiers in Education Conference (FIE), College Station, TX, USA: IEEE, Oct. 18, 2023, pp. 1–4, ISBN: 9798350336429. DOI: 10.1109/FIE58773.2023.10343187.
[2] S. Khorbotly, "Machine learning: An undergraduate engineering course," in 2022 ASEE Illinois-Indiana Section Conference Proceedings, Anderson, Indiana: ASEE Conferences, Apr. 2022, p. 42132. DOI: 10.18260/1-2--42132.
[3] R. DeMara, A. Gonzalez, A. Wu, et al., "A CRCD experience: Integrating machine learning
, limiting insights into how undergraduate students or those in other disciplines might experience redesigned assessments. The short-term focus of the study also means that long-term impacts on learning and skill retention remain unexplored. Additionally, studies could examine the impact of redesigned assessments on instructor workload, student engagement, and equity and accessibility, ensuring that innovative assessment practices benefit all learners.

References

Anthropic. (2024). Claude [Large language model]. https://www.anthropic.com/
Google. (2024). Gemini [Large language model]. https://gemini.google.com/
Huang, A. Y. Q., Lu, O. H. T., & Yang, S. J. H. (2023). Effects of artificial intelligence–enabled personalized recommendations on learners
. For example, Scenario 3 on 'general-purpose' AI is inspired by the EU AI Act's requirement [28] that proprietors of 'general-purpose' AI systems report details about model architecture and training processes to national AI authorities. Each scenario, along with the proposed AI regulations that Congress can vote on, is described in Appendix B. To start the game, only members of Evil Inc. are told internal company information that motivates their lobbying efforts. For example: "Evil Inc.'s large language model is only successful because its model architecture is kept a secret, so Evil Inc. should prevent Congress from requiring the disclosure of any model architecture information." Each round, Evil Inc. members decide how to distribute a limited
. Communications in Computer and Information Science, H. Florez and H. Astudillo, Eds. Springer, Cham, 2025, vol. 2237, accessed: 21-Oct-2024. [Online]. Available: https://doi.org/10.1007/978-3-031-75147-9_4
[3] K. Shah, P. Lee, D. Barretto, and S. N. Liao, "A qualitative study on how students interact with quizzes and estimate confidence on their answers," in Proceedings of the 26th ACM Conference on Innovation and Technology in Computer Science Education V. 1, ser. ITiCSE '21. New York, NY, USA: Association for Computing Machinery, 2021, pp. 32–38. [Online]. Available: https://doi.org/10.1145/3430665.3456377
[4] S. N. Liao, "Early identification of at-risk students and understanding their behaviors," Ph.D. dissertation, UC San
expressed in this paper are those of the authors and do not necessarily reflect the views of the NSF.

References

[1] A. Ehrmann, T. Blachowicz, G. Ehrmann, and T. Grethe, "Recent developments in phase-change memory," Applied Research, Jun. 2022, doi: https://doi.org/10.1002/appl.202200024.
[2] R. Azevedo, J. D. Davis, K. Strauss, P. Gopalan, M. Manasse, and S. Yekhanin, "Zombie memory: Extending memory lifetime by reviving dead blocks," in Proceedings of the International Symposium on Computer Architecture (ISCA), 2013.
[3] H. Luo et al., "Write Energy Reduction for PCM via Pumping Efficiency Improvement," ACM Transactions on Storage, vol. 14, no. 3, pp. 1–21, Aug. 2018.
[4] J. Fan, S. Jiang, J. Shu, Y. Zhang, and W. Zhen
objectives mapped to each question. Occasionally, if the questions were phrased in a confusing manner, thresholds were reduced. Overall, the cutoffs were consistent with traditional letter grade cutoffs. Programming exams contained specifications similar to lab assignments (as described below), with tasks summing up to particular letter grades.

Example specifications for a lab assignment

Task 1. Padovan Sequence:
[LP] Part 1: In a file called Padovan.txt, write the pseudocode to recursively compute the nth Padovan number. A Padovan number is an extension of the Fibonacci series defined by the relation P(n) = P(n-2) + P(n-3), with P(0) = P(1) = P(2) = 1. Clearly state your base case(s).
[LP] Part 2: Implement the pseudocode in a function called unsigned
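For reference, a minimal Python sketch of the recursion in Task 1 (the lab itself asks for pseudocode and a C/C++-style function, so this version is only meant to illustrate the base cases and the recurrence, not to serve as a solution):

```python
# Padovan recurrence: P(n) = P(n-2) + P(n-3), with P(0) = P(1) = P(2) = 1.
def padovan(n: int) -> int:
    if n <= 2:          # base cases
        return 1
    return padovan(n - 2) + padovan(n - 3)

print([padovan(i) for i in range(10)])  # [1, 1, 1, 2, 2, 3, 4, 5, 7, 9]
```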
points, with Group 1's average being 5.81 and Group 2's being 4.91. This indicates that, on average, Group 1 achieved higher grades than Group 2.

Applying ANOVA to compare performance (final grade) between Group 1 and Group 2 yields an F-statistic value of 116.8963 and a p-value of approximately 7.411684e-13. The p-value is extremely low (much lower than any standard significance threshold, such as 0.05), indicating a statistically significant difference between the two groups' final grades. This suggests that academic performance (measured by final grade) significantly differs between Group 1 and Group 2.

Analysis of Students' Work Experience

Both groups belong to the evening session, mainly consisting of students working during the day. Additionally
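As an illustrative sketch of the reported test (not the study's actual data or code), a one-way ANOVA on two groups' final grades can be computed with SciPy; the grade values below are placeholders:

```python
# One-way ANOVA comparing final grades of two groups.
from scipy import stats

group1_grades = [5.9, 5.7, 5.8, 6.0, 5.6]   # illustrative values only
group2_grades = [4.8, 5.0, 4.9, 4.9, 5.0]

f_stat, p_value = stats.f_oneway(group1_grades, group2_grades)
print(f"F = {f_stat:.4f}, p = {p_value:.3e}")
# A p-value below 0.05 indicates a statistically significant difference
# between the two groups' mean final grades.
```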
programming assignments," Computer Science Education, vol. 15, no. 2, pp. 83–102, 2005. doi: 10.1080/08993400500150747.
[2] C. Douce, D. Livingstone, and J. Orwell, "Automatic test-based assessment of programming: A review," Journal on Educational Resources in Computing (JERIC), vol. 5, no. 3, pp. 4–es, 2005. doi: 10.1145/1163405.1163409.
[3] P. Li and L. Toderick, "An automatic grading and feedback system for e-learning in information technology education," in Proc. 2015 ASEE Annu. Conf. & Expo., Seattle, WA, USA, 2015, pp. 26.179.1–26.179.11. [Online]. Available: https://peer.asee.org/23518
[4] C. L. Hull, "Simple trial and error learning: A study in psychological theory," Psychological Review, vol. 37, no. 3, pp. 241–256, 1930. doi: 10.1037/h0073614.
[5] S. H
and professional trajectories of CS students.

References

[1] D. Shah. "By the numbers: MOOCs in 2021." Class Central. Accessed: Feb. 20, 2025. [Online]. Available: https://www.classcentral.com/report/mooc-stats-2021/
[2] L. J. Sax, M. A. Kanny, T. A. Riggers-Piehl, H. Whang, and L. N. Paulson. "'But I'm not good at math': The changing salience of mathematical self-concept in shaping women's and men's STEM aspirations," Research in Higher Education, vol. 58, no. 7, pp. 763–794, Feb. 2017, doi: 10.1007/s11162-017-9450-6.
[3] National Center for Education Statistics, 2020, "Completions," Integrated Postsecondary Education Data System (IPEDS). [Online]. Available: https://nces.ed.gov/ipeds/use-the-data
[4] S. Mithun and X
, 2002.
[9] Houghton Mifflin. Project-based learning: Background knowledge & theory, 2003. http://www.college.hmco.com/education/pbl/background.html.
[10] M. Fasli and M. Michalakopoulos. Supporting active learning through game-like exercises. In Proceedings of the 5th IEEE International Conference on Advanced Learning Technologies, pages 730–734. IEEE, 2005. doi: 10.1109/ICALT.2005.159.
[11] R. Lawrence. Teaching data structures using competitive games. IEEE Transactions on Education, 47(4):459–466, 2004. doi: 10.1109/TE.2004.825053.
[12] S. Lam, P. Yim, J. Law, and R. Cheung. The effects of classroom competition on achievement motivation. In Proc. of Annual Conference of the American Psychological Association, 2001.
[13