{"ID":35617,"CreatedAt":"2026-02-27T13:00:40Z","UpdatedAt":"2026-02-27T13:00:40Z","DeletedAt":null,"paper_url":"https://paperswithcode.com/paper/censoring-representations-with-an-adversary","arxiv_id":"1511.05897","title":"Censoring Representations with an Adversary","abstract":"In practice, there are often explicit constraints on what representations or\ndecisions are acceptable in an application of machine learning. For example it\nmay be a legal requirement that a decision must not favour a particular group.\nAlternatively it can be that the representation of data must not have\nidentifying information. We address these two related issues by learning\nflexible representations that minimize the capability of an adversarial critic.\nThis adversary is trying to predict the relevant sensitive variable from the\nrepresentation, and so minimizing the performance of the adversary ensures\nthere is little or no information in the representation about the sensitive\nvariable. We demonstrate this adversarial approach on two problems: making\ndecisions free from discrimination and removing private information from\nimages. We formulate the adversarial model as a minimax problem, and optimize\nthat minimax objective using a stochastic gradient alternate min-max optimizer.\nWe demonstrate the ability to provide discriminant free representations for\nstandard test problems, and compare with previous state of the art methods for\nfairness, showing statistically significant improvement across most cases. The\nflexibility of this method is shown via a novel problem: removing annotations\nfrom images, from unaligned training examples of annotated and unannotated\nimages, and with no a priori knowledge of the form of annotation provided to\nthe model.","short_abstract":"The flexibility of this method is shown via a novel problem: removing annotations from images, from unaligned training examples of annotated and unannotated images, and with no a priori knowledge of the form of annotation provided to the model.","url_abs":"http://arxiv.org/abs/1511.05897v3","url_pdf":"http://arxiv.org/pdf/1511.05897v3.pdf","authors":"[\"Harrison Edwards\", \"Amos Storkey\"]","published":"2015-11-18T00:00:00Z","tasks":"[\"Fairness\"]","methods":"[]","has_code":false,"code_links":[{"ID":200888,"CreatedAt":"2026-02-27T13:01:31Z","UpdatedAt":"2026-02-27T13:01:31Z","DeletedAt":null,"paper_id":35617,"paper_url":"https://paperswithcode.com/paper/censoring-representations-with-an-adversary","paper_title":"Censoring Representations with an Adversary","repo_url":"https://github.com/sanchom/algorithmic-decision-making-and-rule-of-law","is_official":false,"mentioned_in_paper":false,"mentioned_in_github":true,"framework":"none","github_stars":0},{"ID":501049,"CreatedAt":"2026-03-04T21:00:12Z","UpdatedAt":"2026-03-04T21:00:12Z","DeletedAt":null,"paper_id":35617,"paper_url":"https://paperswithcode.com/paper/censoring-representations-with-an-adversary","paper_title":"Censoring Representations with an Adversary","repo_url":"https://github.com/sanchom/algorithmic-decision-making-and-rule-of-law","is_official":false,"mentioned_in_paper":false,"mentioned_in_github":true,"framework":"none","github_stars":0}]}
