Gennaro De Luca; Yinong Chen
Arizona State University, Tempe, USA
Received 9 June 2020; Revised 14 October 2020; Accepted 27 October 2020; Published online 28 December 2020
Teaching students the concepts behind computational thinking is a difficult task, often complicated by the inherent difficulty of programming languages. In the classroom, teaching assistants may be required to interact with students to help them learn the material. Time spent grading and providing feedback on assignments reduces the time available to help students directly. As such, we offer a framework for developing an explainable Artificial Intelligence that performs automated analysis of student code while offering feedback and partial credit. The system rests on three core components: a knowledge base, a set of conditions to be analyzed, and a formal set of inference rules. In this paper, we develop such a system for our own language, VIPLE, by employing Pi-Calculus and Hoare Logic. The system can also learn rules automatically. Given solution files, the system extracts the important aspects of the program and generates feedback that explicitly details the errors students make when they deviate from these aspects. The level of detail and expected precision can be adjusted through parameter tuning and variety in the sample solutions.
Explainable AI; Pi-Calculus; VIPLE; Education
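The three-component design summarized in the abstract (a knowledge base of expected program properties, a set of observed conditions extracted from student code, and inference rules that map mismatches to feedback and partial credit) can be sketched as follows. This is a minimal illustrative sketch only; all names, data formats, and the rule structure are assumptions for illustration, not the paper's actual system or API.

```python
# Hypothetical sketch of a rule-based grader with three components:
# a knowledge base (properties of the solution), observed conditions
# (properties extracted from the student's program), and inference
# rules that produce partial credit plus explainable feedback.

def grade(knowledge_base, observed_conditions, rules):
    """Apply each inference rule; accumulate credit and feedback messages."""
    feedback, credit = [], 0.0
    for rule in rules:
        earned, message = rule(knowledge_base, observed_conditions)
        credit += earned
        if message:
            feedback.append(message)
    return credit, feedback

def requires(prop, weight):
    """Build a rule: the student's program must exhibit a property
    that the knowledge base marks as expected."""
    def rule(kb, observed):
        if prop in kb and prop not in observed:
            return 0.0, f"Missing expected behavior: {prop}"
        return weight, None
    return rule

# Toy example: the student's loop never terminates.
kb = {"initializes counter", "loop terminates"}
student = {"initializes counter"}
rules = [requires("initializes counter", 0.5),
         requires("loop terminates", 0.5)]

credit, feedback = grade(kb, student, rules)
print(credit)    # 0.5
print(feedback)  # ['Missing expected behavior: loop terminates']
```

Because each rule returns both a score and a human-readable message, the resulting feedback is explainable by construction: every deduction is tied to a named, missing property rather than an opaque score.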