Abstract: Ranking schemes drive many real-world decisions: where to study, whom to hire, what to buy, and so on. Many of these decisions carry high consequences. For example, a university can be deemed less prestigious if it is not featured in a top-k list, and consumers might never explore products that are not recommended to them. At the heart of most of these decisions are opaque ranking schemes that dictate the ordering of data entities but whose internal logic is inaccessible or proprietary. Drawing inferences about ranking differences thus becomes a guessing game for stakeholders, both the rankees (i.e., the entities that are ranked, such as product companies) and the decision-makers (i.e., those who use the rankings, such as buyers). In this paper, we aim to enable transparent ranking interpretation by using algorithmic rankers that learn from available data and by supporting human reasoning about the learned ranking differences with explainable AI (XAI) methods. To realize this aim, we leverage the exploration-explanation paradigm of human-data interaction, letting human stakeholders explore subsets and groupings of complex multi-attribute ranking data through visual explanations of model fit and of attribute influence on rankings. We realize this explanation paradigm for transparent ranking interpretation in TRIVEA, a visual analytic system fueled by: i) visualizations of model fit derived from algorithmic rankers that learn the associations between attributes and rankings from available data, and ii) visual explanations derived from XAI methods that abstract important patterns, such as the relative influence of attributes in different ranking ranges. Using TRIVEA, end users not trained in data science can transparently reason about the global and local behavior of the rankings, without needing to open black-box ranking models, and can develop confidence in the resulting attribute-based inferences. We demonstrate the efficacy of TRIVEA through multiple usage scenarios and subjective feedback from researchers with diverse domain expertise.
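To make the two ingredients above concrete, here is a minimal Python sketch, not TRIVEA's actual pipeline: a surrogate ranker learned from attribute data, followed by attribute-influence estimates computed separately per ranking range. The dataset, attribute names, model choice, and range boundaries are illustrative assumptions.

```python
# Sketch: (i) learn a surrogate ranker from attributes -> observed ranks,
# (ii) estimate attribute influence within each rank range, since an
# attribute may matter more among top-ranked entities than overall.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n, attrs = 200, ["research", "teaching", "citations", "outlook"]
X = rng.normal(size=(n, len(attrs)))            # hypothetical rankee attributes
score = X @ np.array([0.5, 0.2, 0.2, 0.1]) + rng.normal(scale=0.1, size=n)
rank = (-score).argsort().argsort() + 1         # 1 = best-ranked entity

# (i) Surrogate ranker learned from available data.
model = GradientBoostingRegressor().fit(X, rank)

# (ii) Attribute influence per ranking range (top-50 vs. the rest).
for label, mask in [("top-50", rank <= 50), ("rest", rank > 50)]:
    imp = permutation_importance(model, X[mask], rank[mask],
                                 n_repeats=10, random_state=0).importances_mean
    print(label, dict(zip(attrs, imp.round(3))))
```

Splitting the influence computation by rank range mirrors the abstract's point that the relative influence of attributes can differ across ranking ranges.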
Abstract: Algorithmic rankers are ubiquitously applied in automated decision systems such as hiring, admission, and loan-approval systems. Without appropriate explanations, decision-makers often cannot audit or trust algorithmic rankers' outcomes. In recent years, explainable AI (XAI) methods have focused on classification models, but state-of-the-art explanation methods for algorithmic rankers have yet to be developed. Moreover, explanations are sensitive to changes in data and ranker properties, and decision-makers need transparent model diagnostics for calibrating the degree and impact of ranker sensitivity. To fulfill these needs, we take a dual approach: i) designing explanations by transforming Shapley values for the simple form of a ranker based on linear weighted summation, and ii) designing a human-in-the-loop sensitivity-analysis workflow by simulating data whose attributes follow user-specified statistical distributions and correlations. We use a visualization interface to validate the transformed Shapley values and to draw inferences from them through multi-factorial simulations spanning data distributions, ranker parameters, and rank ranges.
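For intuition, the sketch below combines the two ideas at a high level; it is not the paper's exact transformation. It uses the closed form of Shapley values for a linear weighted-sum scorer under an independence assumption, phi_i = w_i * (x_i - E[x_i]), on data simulated with a user-specified correlation structure. Weights, distributions, and the rank-range split are illustrative assumptions.

```python
# Sketch: simulate attributes with user-specified correlations, score them
# with a linear weighted-sum ranker, and compute closed-form Shapley values
# of the score for every item and attribute.
import numpy as np

rng = np.random.default_rng(42)
corr = np.array([[1.0, 0.6, 0.0],
                 [0.6, 1.0, 0.2],
                 [0.0, 0.2, 1.0]])                 # user-specified correlations
X = rng.multivariate_normal(mean=[0.0, 0.0, 0.0], cov=corr, size=500)
w = np.array([0.5, 0.3, 0.2])                      # ranker weights

scores = X @ w
ranks = (-scores).argsort().argsort() + 1          # 1 = top-ranked item

# Closed-form Shapley values of the score (independent-attribute assumption).
phi = w * (X - X.mean(axis=0))                     # shape: (items, attributes)

# Compare attribute contributions across rank ranges.
top = ranks <= 50
print("mean |phi|, top-50:", np.abs(phi[top]).mean(axis=0).round(3))
print("mean |phi|, rest:  ", np.abs(phi[~top]).mean(axis=0).round(3))
```

Re-running such a simulation while varying the correlation matrix, the weights, or the rank-range boundary gives a rough sense of the multi-factorial sensitivity analysis the abstract describes.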
Abstract: As automated decision systems (ADS) become more deeply embedded in business processes worldwide, there is a growing need for practical ways to establish meaningful transparency. Here we argue that universally perfect transparency is impossible to achieve. We introduce the concept of contextual transparency, an approach that integrates social science, engineering, and information design to help improve ADS transparency for specific professions, business processes, and stakeholder groups. We demonstrate the applicability of the contextual transparency approach by applying it to a well-established ADS transparency tool: nutritional labels that display specific information about an ADS. Empirically, we focus on the profession of recruiting. Presenting data from an ongoing study of ADS use in recruiting alongside a typology of ADS nutritional labels, we suggest a nutritional label prototype for ADS-driven rankers such as LinkedIn Recruiter before closing with directions for future work.
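As a rough illustration only, a machine-readable nutritional label for an ADS-driven ranker might collect fields like those below. The schema and all field values are hypothetical guesses, not the prototype proposed in the paper.

```python
# Hypothetical schema for a ranker's "nutritional label": structured,
# audience-facing facts about what the system does and what it uses.
from dataclasses import dataclass, field

@dataclass
class RankerNutritionalLabel:
    system: str                       # name of the ADS
    purpose: str                      # what the ranking is for
    attributes_used: list[str]        # inputs that influence rank
    data_sources: list[str]           # where those inputs come from
    update_frequency: str             # how often rankings change
    known_limitations: list[str] = field(default_factory=list)

label = RankerNutritionalLabel(
    system="LinkedIn Recruiter",
    purpose="Order candidate profiles for a recruiter's search query",
    attributes_used=["skills match", "seniority", "profile activity"],
    data_sources=["member profiles", "recruiter query"],
    update_frequency="per query",
    known_limitations=["attribute weights are proprietary"],
)
print(label.system, "-", label.purpose)
```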
Abstract: Algorithmic rankers have a profound impact on our increasingly data-driven society. From leisurely activities, such as the movies we watch and the restaurants we patronize, to highly consequential decisions, such as making educational and occupational choices or getting hired by companies, these are all driven by sophisticated yet mostly inaccessible rankers. A small change to how these algorithms process the rankees (i.e., the data items that are ranked) can have profound consequences. For example, a change in rankings can erode the prestige of a university or have drastic consequences for a job candidate who missed out on an organization's preferred top-k list. This paper is a call to action to the human-centered data science research community to develop principled methods, measures, and metrics for studying the interactions among the socio-technical context of use, technological innovations, and the resulting consequences of algorithmic rankings on multiple stakeholders. Given the spate of new legislation on algorithmic accountability, it is imperative that researchers from social science, human-computer interaction, and data science work in unison to demystify how rankings are produced, who has agency to change them, and what metrics of socio-technical impact one must use to inform the context of use.