Intelligent Systems Workshop – 7th edition in the Series on Autonomously Learning and Optimising Systems (SAOS)
co-located with the 32nd GI/ITG ARCS 2019 in Copenhagen, Denmark, from May 20 to May 23, 2019

Keynote Announcement
Intelligent Socio-Technical Systems
And A Case for Socially Self-Aware Machines
Dr. Peter Lewis, Aston University, UK

Abstract:
As technology becomes more autonomous and ever more integrated into our daily lives, social factors become at least as important as technical ones in its design. These socio-technical systems provide new opportunities for citizens to work together, supported by machines, to tackle pressing societal challenges. As we delegate more of our own decisions to machines, a key question that arises is how to build autonomous systems that are congruent by design with social expectations, and with their underlying human values, ethics, and norms. This requires us to explicitly design into intelligent systems mechanisms for reflection in their social context. More specifically, machines must possess the ability to judge their own behaviour from a social perspective, to identify a social standard and compare themselves against it, and to modify their behaviour accordingly. In short, this requires machines to possess social self-awareness. Recent research in self-aware computing has approached some of these capabilities, but there remain a number of fundamental challenges in building intelligent systems that are capable of intentionally sustaining a positive and healthy relationship with society. For example, although socio-technical systems present great opportunities, they also present great challenges, because they require individuals to cooperate by contributing their time, effort and resources to a shared enterprise.

In this talk I will explore this aspect of intelligent socio-technical systems, starting from the empirical observation that human societies can avoid antisocial outcomes, such as the Tragedy of the Commons, by establishing institutional rules that govern their interactions. This requires reflection and cognition concerning the social situation in which we find ourselves, in the context of our own values. However, even today's most advanced autonomous systems do not do this, and thus delegating decision-making to them at scale would be prone to creating anti-social outcomes that are not congruent with our own values. An important challenge is therefore to identify runtime modelling techniques suitable for providing an autonomous system with the machinery necessary for social self-awareness. I will explore two different modelling approaches: agent-based modelling, based on executing the content of behaviours; and evolutionary game theory, where a description of the value of behaviours instead forms the basis. I will present results from both approaches, establishing a complementarity between them in terms of their ability to generate actionable insight into social situations that intelligent systems may find themselves in. Specifically, content-based and value-based models each offer different strengths: capturing complex cognitive behaviours, and making precise predictions about how to control and make decisions within a socio-technical system, respectively. Yet each on its own appears insufficient to provide a fully actionable picture of how to exercise autonomy within a socio-technical system. I will then place this in the broader context of current self-aware computing research, and discuss what will be needed for these models to be used at run time in order to provide intelligent systems with the requisite social self-awareness: to enable them to govern their own behaviour in line with social expectations and, ultimately, to be worthy of deeper trust when acting on our behalf.
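To make the contrast between the two modelling styles concrete, the sketch below is illustrative only and not taken from the talk: it assumes a linear public-goods payoff and hypothetical parameters B, C and N, and contrasts a value-based replicator-dynamics model with a content-based agent simulation of a simple commons dilemma. Under these assumptions, both models predict the collapse of cooperation in the absence of institutional rules.

```python
# Illustrative sketch only (not from the talk): a linear public-goods game with
# assumed parameters, modelled two ways. B is the shared benefit multiplier,
# C the cost of contributing, N the number of agents.
import random

B, C, N = 3.0, 1.0, 50

def payoff(cooperates: bool, coop_fraction: float) -> float:
    """Everyone receives a share of the pooled benefit; only cooperators pay the cost."""
    return B * coop_fraction - (C if cooperates else 0.0)

# Value-based view: replicator dynamics over the *value* (payoff) of behaviours.
def replicator_step(x: float, dt: float = 0.1) -> float:
    """One Euler step of dx/dt = x * (1 - x) * (payoff_C - payoff_D)."""
    gap = payoff(True, x) - payoff(False, x)   # equals -C here, so cooperation declines
    return min(1.0, max(0.0, x + dt * x * (1.0 - x) * gap))

# Content-based view: agents that *execute* a simple imitate-the-better-peer behaviour.
def abm_step(strategies: list) -> list:
    x = sum(strategies) / len(strategies)      # current cooperator fraction
    scores = [payoff(s, x) for s in strategies]
    new = list(strategies)
    for i in range(len(strategies)):
        j = random.randrange(len(strategies))
        if scores[j] > scores[i]:
            new[i] = strategies[j]             # copy the more successful strategy
    return new

if __name__ == "__main__":
    x = 0.9                                    # 90% cooperators initially
    agents = [random.random() < 0.9 for _ in range(N)]
    for _ in range(100):
        x = replicator_step(x)
        agents = abm_step(agents)
    print(f"replicator cooperator share:  {x:.2f}")
    print(f"agent-based cooperator share: {sum(agents) / N:.2f}")
```

The value-based model yields a precise, analysable prediction, while the agent-based model can be extended with richer cognitive behaviours; this reflects the complementarity between the two approaches discussed in the talk.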
Bio:
Peter Lewis is a Senior Lecturer in Computer Science at Aston University in the UK. His research is concerned with computational systems that are inspired by biological, social and psychological processes. His focus is on continuous adaptation and learning in complex agent-based systems, and he is particularly interested in the evolution of self-awareness in a social context. He has made significant contributions to the field of self-aware computing, including the foundational book Self-aware Computing Systems: An Engineering Approach, published in 2016. Through ongoing industrial research collaborations, his work has been applied in areas such as smart camera networks, interactive music devices, avionics, manufacturing, and cloud computing. He is Director of the Think Beyond Data initiative, part-funded by the European Regional Development Fund, which provides an artificial intelligence R&D capability to businesses across the Midlands of England. In 2016, he also co-founded Beautiful Canoe, a student-powered social enterprise whose vision is to develop the technology leaders of the future and which delivers bespoke software for a range of clients in the public, private and third sectors. Beautiful Canoe has recently been recognised by the new national Institute of Coding and serves as a model for student-powered enterprises across the UK.

Aims and Scope of SAOS
Complexity in Information and Communication Technology (ICT) is still increasing, driven by the growing number of devices with vast amounts of computational resources. As a result, the administration of present and future systems is becoming impossible for central human operators. To tackle this development, modern systems are increasingly equipped with techniques and algorithms that allow them to act 'intelligently'. Such Intelligent Systems (IS) strongly leverage the recent advances in Artificial Intelligence (AI) in general and in Machine Learning (ML) in particular. With this workshop, we provide a forum for presenting and discussing novel concepts, techniques and algorithms for IS, as well as related aspects such as quantification approaches to peek into the black boxes often involved. An IS is characterised by its autonomous learning behaviour, which includes mechanisms to relieve the human from non-trivial tasks such as hyperparameter optimisation, algorithm selection and continual monitoring in pursuit of recognising changes, anomalies, etc. The degree of system autonomy shall be increased as far as possible while still complying with the safety boundaries present in nearly any real-world application. As a result, the vision of initiatives such as Organic Computing and Autonomic Computing manifests itself: traditional design-time decisions are moved to run time and, thus, the systems themselves take over control. In this workshop, we solicit research and position papers that encompass the application of novel and established ML techniques. Thereby, we explicitly emphasise the interpretability and explainability of the involved algorithms, in order to provide a basis for system transparency already at the core of its mechanisms.
Besides this self-explanation property of the targeted systems, another ingredient for reaching a certain level of intelligence is self-awareness and the resulting, ongoing pursuit of self-optimisation. This particular tension between system learning, optimisation and evolution on the one hand, and the need for system explainability, validation and exploration boundaries on the other, constitutes the main motivation and unique characteristic of this workshop.

Submissions are expected to focus on at least one of the following main topics:
A. Autonomous Learning Behaviour in Technical Systems, e.g. Active Learning, Transfer Learning, Online Concept Drift/Shift and Novelty/Obsoleteness Detection, Reinforcement Learning from Feedback, Transductive Inference for Efficient Model Building, Self-Awareness, …
B. Self-Adaptation@Runtime, e.g. Automated Algorithm Configuration & Selection, Evolutionary Computation as a Mechanism for Change, Context-Aware and Transient Interfaces, …
C. Metrics and Quantification, e.g. for System Validation, Guarantees, Understanding & Trust, …

Important Dates
Based on the currently available schedule for the ARCS conference, we expect the following schedule for SAOS:
Paper submission deadline: February 15, 2019
Extended submission deadline: March 1, 2019
Decision notification: March 20, 2019
Camera-ready version: March 27, 2019

Submission
Papers should be written in English and formatted according to the IEEE CIS template in "conference mode" (see http://www.ieee.org/conferences_events/conferences/publishing/templates.html). Papers should not exceed 8 pages (full paper) or 4 pages (short paper). PDF submission via EasyChair: https://easychair.org/conferences/?conf=saos2019. Further details depend on the requirements given by the ARCS organisers and will be provided here as soon as possible.

Organisation & Technical Programme Committee
Organising Committee
Anthony Stein, Universität Augsburg (DE)
Sven Tomforde, Universität Kassel (DE)
Jean Botev, University of Luxembourg (LU)
Peter Lewis, Aston University (UK)
Advisory Board
Jörg Hähner, Universität Augsburg (DE)
Christian Müller-Schloer, Leibniz University Hannover (DE)
Bernhard Sick, Universität Kassel (DE)
Programme Committee (tentative)
Kirstie Bellman, Topcy House Consulting
Jean Botev, University of Luxembourg
Uwe Brinkschulte, University of Frankfurt
Ada Diaconescu, Telecom ParisTech
Lukas Esterle, Aston University
Jörg Hähner, University of Augsburg
Heiko Hamann, University of Luebeck
Martin Hoffmann, Bielefeld University of Applied Sciences
Christian Krupitzer, University of Würzburg
Chris Landauer, Topcy House Consulting
Peter Lewis, Aston University
Erik Maehle, University of Luebeck
Gero Mühl, University of Rostock
Christian Müller-Schloer, Leibniz University Hannover
Christian Renner, University of Luebeck
Stefan Rudolph, University of Augsburg
Hella Ponsar, University of Augsburg
Wolfgang Reif, University of Augsburg
Ingo Scholtes, University of Zurich
Bernhard Sick, University of Kassel
Anthony Stein, University of Augsburg
Claudio Juan Tessone, University of Zurich
Sven Tomforde, University of Kassel
Sebastian von Mammen, University of Würzburg
Torben Weis, University of Duisburg-Essen
Organised by the Special Interest Group Organic Computing within the Gesellschaft für Informatik (GI)