Dagstuhl Seminar 23432

Edge-AI: Identifying Key Enablers in Edge Intelligence

(Oct 22 – Oct 25, 2023)


Permalink
Please use the following short url to reference this page: https://www.dagstuhl.de/23432

Organizers
  • Aaron Ding
  • Eyal de Lara
  • Schahram Dustdar
  • Ella Peltonen


Summary

Research Area

Edge computing promises to decentralize cloud applications while providing more bandwidth and reducing latency. It delivers on these promises by moving application-specific computations between the cloud, the data-producing devices, and the network infrastructure components at the edges of wireless and fixed networks. Meanwhile, current Artificial Intelligence (AI) and Machine Learning (ML) methods assume that computations are carried out on powerful infrastructure, such as data centres with ample computing and data storage resources. To shed light on the fast-evolving domain that merges edge computing with AI/ML, referred to as Edge AI, the earlier Dagstuhl Seminar 21342 gathered input from a diverse range of experts. The results of that first iteration of the seminar were published in ACM SIGCOMM CCR, examining Edge AI from three different angles: future networking, cloud computing, and AI/ML needs.

Along with the three identified driving areas of beyond-5G (so-called 6G), the future cloud, and evolved AI/ML, the advancement of different technologies and growing business interest will take Edge AI forward in terms of hardware, software, service models, and data governance. Starting from the current state of play driven by cellular, cloud, and AI/ML service providers, the roadmap outlines five general phases: scalable frameworks, trustworthy co-design, sustainable and energy-efficient deployment, equal accessibility, and pervasive intelligent infrastructure. Since circumstances can always change, the phases depicted in the roadmap may be reordered or combined. Nonetheless, this Edge AI roadmap reflects the combined effects of technology enablers and non-technical demands, such as the socioeconomic transformation of user behaviors, purchasing power, and business interests.

Despite its promise and potential, Edge AI still faces major challenges in large-scale deployment, including energy optimization, trustworthiness, security, privacy, and ethical issues. Sustainability is an important goal, so the energy consumption of Edge AI needs to be optimized. Energy efficiency is crucial for Edge AI embedded in infrastructure elements (e.g., roadside units, micro base stations) that must sustainably support advanced autonomous driving and Extended Reality (XR) services in the years to come. Across the pipeline of data acquisition, transfer, computation, and storage, Edge AI can trade accuracy for reduced power consumption and processing time. For instance, noisy inputs from numerous sensors can be selectively processed and transferred in order to save energy.
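To make the selective processing mentioned above concrete, the following minimal Python sketch shows one common way such a tradeoff can be realized at an edge gateway: a "send-on-delta" filter that forwards a sensor reading only when it drifts beyond an accepted tolerance. This sketch is purely illustrative and not part of the seminar material; the class name, tolerance value, and simulated signal are hypothetical assumptions.

```python
# Illustrative sketch (not from the seminar report): a send-on-delta filter
# that an edge gateway might apply to noisy sensor readings, trading a bounded
# loss of accuracy for fewer radio transmissions and hence lower energy use.
# All names and numbers below are hypothetical.

import random
from dataclasses import dataclass
from typing import Optional


@dataclass
class SendOnDeltaFilter:
    """Forward a reading only when it drifts beyond the accepted tolerance.

    The tolerance encodes the 'acceptable' accuracy: the receiver reconstructs
    the signal from the last forwarded value, so its error stays within
    `tolerance`, while every suppressed reading saves one transmission.
    """

    tolerance: float                   # maximum accepted reconstruction error
    last_sent: Optional[float] = None  # last value actually transmitted
    sent: int = 0
    suppressed: int = 0

    def should_send(self, reading: float) -> bool:
        if self.last_sent is None or abs(reading - self.last_sent) > self.tolerance:
            self.last_sent = reading
            self.sent += 1
            return True
        self.suppressed += 1
        return False


if __name__ == "__main__":
    random.seed(0)
    # Simulate a slowly drifting temperature signal with sensor noise.
    readings = [20.0 + 0.01 * t + random.gauss(0, 0.05) for t in range(1000)]

    filt = SendOnDeltaFilter(tolerance=0.2)
    forwarded = [r for r in readings if filt.should_send(r)]

    ratio = filt.suppressed / len(readings)
    print(f"forwarded {filt.sent} of {len(readings)} readings; "
          f"{ratio:.0%} suppressed (error bound ±{filt.tolerance})")
```

Under these assumptions, most readings of a slowly varying signal are suppressed while the reconstruction error at the receiver remains bounded by the chosen tolerance, which is the kind of accuracy-for-energy exchange discussed below.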

A range of applications would be satisfied with an ‘acceptable’ accuracy instead of exact, absolutely correct results. By introducing accuracy as an additional dimension of the optimization design, energy efficiency can be improved further. Concerning trustworthiness, Edge AI benefits from its proximity to end devices. However, because of its distributed deployment and deep insight into personal context, the safety and perceived trustworthiness of Edge AI services are raising concerns among stakeholders (e.g., end users, the public sector, ISPs). Achieving trustworthy Edge AI requires critical building blocks, including verification and validation mechanisms that ensure transparency and explainability, especially for the training and deployment of Edge AI in decentralized, uncontrolled environments. The trustworthiness of Edge AI is a stepping stone to establishing the appropriate governance and regulatory framework on which the promise of Edge AI can be built.

Copyright Aaron Ding, Eyal de Lara, Schahram Dustdar, and Ella Peltonen

Motivation

Edge computing promises to decentralize cloud applications while providing more bandwidth and reducing latency. It delivers on these promises by moving application-specific computations between the cloud, the data-producing devices, and the network infrastructure components at the edges of wireless and fixed networks. Meanwhile, current Artificial Intelligence (AI) and Machine Learning (ML) methods assume that computations are carried out on powerful infrastructure, such as datacenters with ample computing and data storage resources. In this Dagstuhl Seminar, we address challenges that include 1) large-scale deployment of the edge-cloud continuum, 2) energy optimization and sustainability of such large-scale AI/ML learning and modelling, and 3) trustworthiness, security, and ethical questions related to the intelligent edge-cloud continuum.

The output of this Dagstuhl Seminar will include a roadmap for large-scale, energy-efficient, and safety- and privacy-aware Edge Intelligence. We welcome participants to join the discussion and to give a 10–15 minute talk presenting their own perspective on the following topics:

  1. The success of the edge-cloud continuum depends on the deployment of edge and AI-driven services as well as software-hardware DevOps. This includes novel directions in programming paradigms, system architectures, and runtime frameworks for achieving large-scale deployment.
  2. For future intelligent embedded infrastructures (e.g., roadside units, micro base stations), it is necessary to sustainably manage the pipeline of data acquisition, transfer, computation, and storage. This includes exploring the tradeoff between accuracy and energy consumption, applications that would be satisfied with an ‘acceptable’ accuracy instead of exact, absolutely correct results, and other new dimensions of accuracy-aware optimization design.
  3. Due to distributed deployment with deep insight into personal context, the safety and perceived trustworthiness of Edge Intelligence services shall be investigated through the lens of multiple stakeholders (e.g., end users, the public sector, ISPs). This includes critical building blocks that can ensure transparency and explainability, especially in the training and deployment of Edge Intelligence in decentralized, uncontrolled environments.
  4. The interplay of edge computing and AI/ML, given functional and non-functional concerns such as safety, privacy, and ethical issues, is the fourth topic to be explored in our seminar.
Copyright Eyal De Lara, Aaron Ding, Schahram Dustdar, and Ella Peltonen

Participants

Related Seminars
  • Dagstuhl Seminar 21342: Identifying Key Enablers in Edge Intelligence (2021-08-22 - 2021-08-25) (Details)

Classification
  • Artificial Intelligence
  • Distributed, Parallel, and Cluster Computing
  • Networking and Internet Architecture

Keywords
  • Edge Computing
  • Cloud Computing
  • Edge Intelligence