Data drift and trust issues in requirements for AI systems

  • 40 min

Given the increasingly widespread deployment of machine learning algorithms in areas affecting health, finance, and human safety, it is crucial to understand what requirements should be placed on trusted AI systems. In this presentation, we will attempt to define what trust means, how requirements for it can be formulated, and what complexities arise in building such systems. We will examine the consequences of requirement violations for trust, using examples from medicine, security management, recommendation systems, and scientific research.
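Data drift, the central phenomenon in the talk's title, is often monitored with a simple distributional comparison between training data and live inputs. The sketch below uses the Population Stability Index (PSI), one common drift metric; the 0.1 threshold is a conventional rule of thumb, and all names and the synthetic data here are illustrative, not taken from the talk.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.
    Bin edges are derived from the expected (training) sample."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(xs):
        counts = [0] * bins
        for x in xs:
            counts[sum(x > e for e in edges)] += 1  # bin index
        # tiny smoothing constant avoids log(0) for empty bins
        return [(c + 1e-6) / len(xs) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

random.seed(0)
train   = [random.gauss(0.0, 1) for _ in range(5000)]  # reference data
stable  = [random.gauss(0.0, 1) for _ in range(5000)]  # same distribution
shifted = [random.gauss(0.5, 1) for _ in range(5000)]  # mean has drifted

print(psi(train, stable))   # small: no drift signal
print(psi(train, shifted))  # larger: drift detected
```

A PSI below roughly 0.1 is usually read as "no significant drift"; values above 0.25 are commonly treated as a trigger for retraining or investigation.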
