Background and Objectives
This workshop aims to bring together leading software engineers, machine learning experts, and practitioners to reflect on and discuss the challenges and implications of building software for complex Artificial Intelligence (AI) systems using Machine Learning (ML) techniques.
The core idea behind this workshop is a growing concern that we have as software engineers in a world where data science, deep learning, and AI are becoming increasingly pervasive. The economic benefits of machine-learning software applications and artificial intelligence in general are forecast to surpass USD 8.81 billion by 2022. Although AI research has produced novel algorithms capable of learning new tasks, adapting to their environment, and evolving, their implementation in software systems remains challenging. From an engineering perspective, once an algorithm is implemented, it requires a solid architecture, model/data validation, proper monitoring for changes, dedicated release engineering strategies, judicious adoption of design patterns and security checks, and thorough user experience evaluation and adjustment. All these activities require combined knowledge in software engineering, data science, and machine learning. A failure to properly address these challenges in such complex software systems can lead to catastrophic consequences. Examples of such failures include the human toll caused by the $47-million Michigan Integrated Data Automated System (MiDAS), the recent finding that simple tweaks can fool neural networks into misidentifying street signs, and the Uber self-driving car that ran into a pedestrian even though the car's sensors detected her presence. The Uber car's ML-based software reportedly decided not to react right away, treating the detection of the pedestrian as a "false positive."
The source of these emerging difficulties is a shift in the development paradigm. Classically, we have constructed software systems in a deductive way, by writing down the rules that govern the system's behaviors as program code. With machine learning techniques, we generate such rules in an inductive way from training data. This shift does not simply require new tools that deal intensively with data; it also introduces unique characteristics. The resulting system behaviors are uncertain: black-box and unexplainable. They are intrinsically imperfect, and it is practically impossible to reason about their correctness in a deductive way.
Given the critical and increasing role of AI-based systems in our society, it is now imperative to engage software engineers and machine learning experts in in-depth conversations about the perspectives, approaches, and roadmaps necessary to address these challenges and concerns.