Employee Portrait

At ICD Business School, our employees bring invaluable expertise and contribute fully to our success. Through a series of interviews, we invite you to enter the captivating world of their professions and immerse yourself in the many experiences that have shaped their careers, offering a privileged look at their contribution to the dynamism of our school.

Employee Portrait, Christian Goglin - Associate Professor and Head of CEDIA, the Center of Expertise in Data and AI of ICD Business School

Please introduce yourself.

Hello, I'm Christian Goglin, an Associate Professor at ICD Business School. I teach Market Finance, AI and AI Ethics, and I head CEDIA, the Center of Expertise in Data and AI, which is attached to our research laboratory.

What is your research topic?

One of my current research topics concerns operational ways of making decision support systems based on Machine Learning models more ethical.

Why this topic? What triggered it?

One of my motivations as a teacher-researcher is to produce useful knowledge that serves the general interest. In addition, I am developing several modules on the ethics of AI, covering its technical, legal, philosophical and societal aspects. This research is therefore consistent with my teaching, as research at ICD Business School is also intended to inform our programs.

What's at stake? What is the situation in 2023?

AI ethics is a hot topic, and the scientific community, in all its diversity (the subject is multidisciplinary by nature), is hard at work on it. While this research topic is by no means new, interest has undoubtedly peaked since the emergence of generative AI as a media phenomenon, with ChatGPT as its figurehead.
The stakes are certainly high, and some see AI developments as posing existential risks for humanity. My view is that these risks remain, at least for the time being, a matter of speculation. On the other hand, many ethical and societal risks already exist and need to be addressed. The first responses are regulatory, but regulation is not the only answer: technical solutions need to be invented for certain specific issues (explainability, algorithmic discrimination, etc.). There are also challenges of education and acculturation: training engineers in the social sciences and ethics, but also giving users a minimum of technical literacy. Finally, good practices and rules of governance must emerge (ethics by design, transdisciplinary ethics committees in organizations, and so on).

What do you see as areas for improvement? Where are we heading in the future?

With regard to the subject I mentioned, making decision support systems based on Machine Learning models more ethical, I'm focusing on the narrower theme of group fairness, which in essence requires that, on average, a Machine Learning decision support system should not treat the subgroups making up the population differently. For example, women should receive the same average treatment as men. In this context, I believe it is possible to go beyond the current state of the art, which consists in adding a group fairness constraint to the model's learning objective. I propose to integrate into the model a hierarchical system of moral values (derived from an authoritative frame of reference), and also to take into account the negative socio-economic impacts of prediction errors for all the stakeholders exposed to the decision. This integration is achieved through a set of weights whose calibration (on a factual basis as far as possible) should result from a democratic process involving experts (ethicists, jurists, data scientists, sociologists and psychologists) as well as representatives of the stakeholders from civil society who are at risk in the use case under consideration, at the appropriate level for the subject (organization, state or region).
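To make the ideas above concrete, here is a minimal sketch, not the author's actual method: it measures the gap in positive-decision rates between two subgroups (a simple group-fairness check) and computes an error cost in which each mistake is weighted by a per-group impact weight. The group labels, toy data and weight values are hypothetical placeholders; in the approach described, such weights would be calibrated through the deliberative process mentioned above.

```python
def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between subgroups."""
    rates = {}
    for g in set(groups):
        picks = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    return max(rates.values()) - min(rates.values())

def weighted_error_cost(y_true, y_pred, groups, impact_weights):
    """Sum of prediction errors, each weighted by the socio-economic
    impact weight assigned to the affected group (weights hypothetical)."""
    return sum(
        impact_weights[g]
        for yt, yp, g in zip(y_true, y_pred, groups)
        if yt != yp
    )

# Toy example: binary decisions (e.g. loan approvals) for two subgroups.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
y_true    = [1, 0, 1, 0, 1, 0, 1, 0]

print(demographic_parity_gap(decisions, groups))        # 0.75 - 0.25 = 0.5
print(weighted_error_cost(y_true, decisions, groups,
                          {"A": 1.0, "B": 2.0}))        # 1 error x 1.0 + 1 error x 2.0 = 3.0
```

In a training setting, a term of this kind would be added to the model's loss so that the optimizer trades off accuracy against the fairness gap and the weighted error impacts.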

In conclusion, the aim of this research is to provide food for thought for regulators (starting with the European Commission) who, through regulation defining technical standards, will have to specify practical ways of operationalizing the non-discrimination requirements of the AI Act, the major regulatory text on AI currently being finalized.