1st International Workshop on
Fairness in Software Systems
March 4, 2025 | Co-located with SANER in Montréal, Québec, Canada

Call for Papers

Update: Authors of papers accepted at the workshop will be invited to submit an extended version of their work to an open call for a Special Issue on Software Fairness in the Information and Software Technology journal.


In contemporary society, technology plays an important role in various domains, including work, education, politics, and leisure. Consequently, without a robust commitment to fairness in software engineering across education, research, and industry, software products may inadvertently inflict harm, particularly by marginalizing certain user groups. Today, as AI systems become more widespread, the expectation for software to represent society’s diversity is not just a technical requirement, but an ethical responsibility. In this context, software fairness has become an essential non-functional requirement and quality attribute, especially for data-driven systems.

Software fairness involves ensuring that software systems, algorithms, and their outcomes are equitable, just, and unbiased across all user demographics, regardless of race, gender, ethnicity, or socioeconomic status. In software engineering, this entails the prevention of discrimination, the promotion of equitable outcomes, and the systematic mitigation of biases throughout the design, development, deployment, and usage phases of software systems. However, despite the critical importance of these considerations, progress in advancing software fairness has been gradual.

The International Workshop on Fairness in Software Systems (Fairness’2025) is a unique event that brings together academics, industrial researchers, and practitioners to exchange experiences, solutions, and new ideas, combining both technical and societal aspects to advance the state of the art in software fairness. The topics of interest include, but are not limited to:

  • Bias in Machine Learning Models: Identifying and mitigating bias in training data; Algorithmic fairness and bias detection techniques; Case studies of bias in deployed systems.
  • Fairness in AI and Machine Learning: Definitions and metrics of fairness; Fairness-aware machine learning algorithms; Trade-offs between fairness and other performance metrics.
  • Ethical Implications of Software Fairness: Ethical considerations in AI development and deployment; Societal impact of unfair software.
  • Transparency and Accountability in Software: Explainable AI and interpretability of machine learning models; Auditing and monitoring AI systems for fairness; Accountability mechanisms for software developers and organizations.
  • Data Collection and Preprocessing for Fairness: Techniques for collecting unbiased and representative data; Data preprocessing methods to ensure fairness; Handling missing data and data augmentation for fairness.
  • Fairness in Human-Computer Interaction (HCI): Designing user interfaces that promote fairness; User perception and trust in fair AI systems; Inclusive design practices and accessibility considerations.
  • Mitigating Fairness Issues in Software Development Lifecycle: Integrating fairness checks in the software development process; Tools and frameworks for developing fair software; Best practices for collaborative and inclusive development teams.
  • Evaluating and Benchmarking Fairness: Standard datasets and benchmarks for fairness evaluation; Comparative studies of fairness metrics and algorithms; Real-world deployment and evaluation of fairness interventions.
  • Software Fairness Debt: Definition and conceptualization of software fairness debt; Impact of fairness debt on long-term system performance and trust; Identifying and measuring fairness debt; Managing and mitigating fairness debt; Economic and organizational impact of fairness debt; Technical approaches to fairness debt remediation.

We welcome articles presenting novel and strong contributions addressing software fairness, including (i) state-of-the-art methods, models, and tools (with evidence of use and study of practical impact) or work bridging the gap between practice and research; (ii) empirical studies in the field, addressing one or more human, technical, social, and economic issues of software fairness through qualitative and/or quantitative analyses; and (iii) industrial experiences, including good practices and lessons learned from managing software fairness in specific contexts or domains.

Dates

Authors must comply with the following deadlines:

Deadline              Date and Time
Abstract Submission   November 22, 2024, 23:59 (AoE)
Paper Submission      November 29, 2024, 23:59 (AoE)
Author Notification   December 20, 2024, 23:59 (AoE)
Camera Ready          January 10, 2025, 23:59 (AoE)
Workshop              March 4, 2025

Submission

Submitted papers must have been neither previously accepted for publication nor concurrently submitted for review in another journal, book, conference, or workshop. All submissions must come in PDF format and conform, at the time of submission, to the IEEE Conference Proceedings Formatting Guidelines. Submissions can be of the following types:

  • Regular Papers: Up to 8 pages, including references. Regular papers must describe original contributions in research and/or practice. Although they can be work-in-progress, the authors must present a clear path forward. These will be given a 20-minute presentation during the workshop.
  • Short Papers: Up to 4 pages, including references. Short papers encompass position papers, experience reports, work-in-progress, new trends papers, industrial reports, datasets, and tools. These will be given a 10-minute presentation during the workshop.

The workshop will follow a double-anonymous peer review process in alignment with SANER’s Review Process policies.

Evaluation Criteria

Research papers will be reviewed by at least two members of the Program Committee. Submissions will be evaluated based on the following criteria:

  • Soundness: The extent to which the paper’s contributions are supported by rigorous application of appropriate research methods.
  • Significance: The extent to which the paper’s contributions are important with respect to software engineering challenges.
  • Novelty: The extent to which the contribution is sufficiently original and is clearly explained with respect to the state-of-the-art.
  • Verifiability: The extent to which the paper includes sufficient information to support independent verification or replication of the paper’s claimed contributions.
  • Presentation: The extent to which the paper’s quality of writing meets the standards of Fairness’2025, including clear descriptions and explanations, appropriate use of the English language, absence of major ambiguity, clearly readable figures and tables, and adherence to the formatting instructions provided above.

Accepted papers will become part of the workshop proceedings, which will be included as a separate section of the proceedings of the main conference. Authors of selected papers will be invited to submit extended versions of their work to the Information and Software Technology Special Issue on Software Fairness.

Organization

  • Ronnie de Souza Santos, Co-Chair, University of Calgary
  • Rodrigo Spínola, Co-Chair, Virginia Commonwealth University
  • Felipe Fronchetti, Media Chair, Louisiana State University
  • Cleyton Magalhães, Media Chair, Federal Rural University of Pernambuco

Program Committee

  • Alvine Boaye Belle, York University, Canada
  • Ann Barcomb, University of Calgary, Canada
  • Brittany Johnson, George Mason University, United States
  • Carolyn Seaman, University of Maryland Baltimore County, United States
  • Christoph Treude, Singapore Management University, Singapore
  • Cristina Martinez Montes, Chalmers University of Gothenburg, Sweden
  • Diego Elias Damasceno Costa, Concordia University, Canada
  • Emeralda Sesari, University of Groningen, Netherlands
  • Fabio de Abreu Santos, Colorado State University, United States
  • Fabio Palomba, University of Salerno, Italy
  • Francisco Gomes, Chalmers University of Gothenburg, Sweden
  • George Augusto Valenca Santos, Federal Rural University of Pernambuco, Brazil
  • Gema Rodriguez-Perez, University of British Columbia, Canada
  • Giuseppe Destefanis, Brunel University, United Kingdom
  • Jeffrey Carver, University of Alabama, United States
  • Kelly Blincoe, University of Auckland, New Zealand
  • Kiev Gama, Universidade Federal de Pernambuco, Brazil
  • Kostadin Damevski, Virginia Commonwealth University, United States
  • Lorenzo De Carli, University of Calgary, Canada
  • Mairieli Wessel, University of Groningen, Netherlands
  • Max Hort, Simula Research Laboratory, Norway
  • Savio Freire, Federal Institute of Ceara, Brazil
  • Sherlock A. Licorish, University of Otago, New Zealand
  • Tim Menzies, North Carolina State University, United States
  • Vladimir Mandic, University of Novi Sad, Serbia

Contact

If you have any questions, do not hesitate to contact us:

Ronnie de Souza: ronnie.desouzasantos@ucalgary.ca
Rodrigo Spínola: spinolaro@vcu.edu