At the invitation of Netspar, Arjen van der Heide and I recently gave a flash webinar on sustainable investment by Dutch pension funds. Our webinar was part of the 11th edition of the Pensioen3daagse, an annual public awareness campaign organized by Wijzer in Geldzaken, an initiative of the Netherlands Ministry of Finance.
Pensions are a difficult topic, and many people don’t know the details of their own retirement savings. Each year, the Pensioen3daagse hosts special events to inform Dutch citizens in an accessible manner, in collaboration with Netspar and other partners. This year, more than 2,000 people signed up to attend the flash webinars on a range of current pension-related themes.
Coinciding with the COP26 climate summit in Glasgow, the Pensioen3daagse offered a great opportunity to shed additional light on sustainable investment by Dutch pension funds. To introduce our audience to an otherwise very technical topic, Arjen and I organized our lecture around five questions:
What are the origins of sustainable investment?
What does sustainable investment entail?
Is sustainable investment profitable?
What are the pros and cons of sustainable investment?
What can pension plan members do themselves?
Broadcasting from the Aegon studio, we answered each of these questions in a 20-minute presentation. Aalt-Jan Smits assisted as online moderator.
For more information on our ongoing research on sustainable pension investment, click here.
Together with Philipp Golka and Aalt-Jan Smits (both Leiden University), I recently participated in two roundtable discussions with trustees and internal supervisors from the Dutch pension sector. During the roundtables, organized by the Monitoring Commission of the Dutch Pension Fund Code (Monitoringcommissie Code Pensioenfondsen), my colleagues and I presented the first results of our research project on internal supervision in Dutch pension funds. Internal supervision is a control function of pension fund boards that was introduced by the legislator in 2014.
Drawing on a survey as well as in-depth qualitative interviews, the Leiden team described the experiences of internal supervisors in safeguarding participant preferences, diversity, and the learning capacity of their organization. In addition, we led a discussion among pension fund board members and internal supervisors by presenting a number of dilemmas that, according to our research, internal supervisors face as part of their responsibilities. The findings from these roundtable discussions will inform the final research report, which aims to serve as a scientific basis for future improvement of the Dutch Pension Fund Code.
The roundtable discussions were held on September 22 and 28, 2021, in Utrecht, the Netherlands. More information regarding the research project on internal supervision can be found here.
I am happy to share the news that I have been promoted to Associate Professor in Public Administration at the Institute of Public Administration as of September 1, 2021. My thanks go out to my colleagues at the Institute of Public Administration for their support and these wonderful flowers.
As of January 2021, I will be joining the Editorial Board of Socio-Economic Review. I am very honored to be able to contribute to a journal that has been so important to my career!
Socio-Economic Review is one of the top journals in political economy and economic sociology. It’s currently ranked 11/180 for Political Science and 6/150 for Sociology, with an impact factor of 3.774 (2019).
This week, my article “Van aandeelhouderskapitalisme naar neushoornobligaties” (“From shareholder capitalism to rhino bonds”) appeared in the latest edition of De Helling. In this article, I discuss the phenomenon of financialization and how it manifests itself in our welfare state, in our shopping streets, and in nature.
“The financial world now has every facet of our daily lives in its grip: from groceries to biodiversity. This so-called financialization produces inequality, a mountain of debt, and employees who are seen not as people but as capital. The financial system must therefore become fairer and more social on the one hand, while on the other hand we must reduce our dependence on finance.”
I am happy to have been nominated as a candidate for the SASE Executive Council. The election for the Executive Council is currently underway.
A SASE member of more than 10 years, I am excited about the opportunity to join the Executive Council. My own interdisciplinary research interests make me particularly well-suited to represent colleagues working on new approaches to economic sociology and political economy, including junior and emerging scholars. If elected, I look forward to supporting new collaborations between research networks. As our profession is rethinking its strong reliance on face-to-face meetings for scholarly exchanges, I am also interested in developing new and inclusive ways in which our members can engage with or participate in SASE, including online. Finally, I would like to support ongoing efforts to green the organization and to explore new ways for SASE to thrive in the coming years.
If you are a SASE member, you can vote for the Executive Council here.
I am thrilled to announce that the Hans-Böckler-Stiftung has decided to fund our research project “Sustainability through occupational pension schemes? The influence of codetermination actors on investment strategies.” The funding will allow Karen Anderson, Tobias Wiss, myself and two postdocs to study this important topic for the next two years. Read more about the project below.
Sustainability through occupational pension schemes? The influence of codetermination actors on investment strategies
How can codetermination actors influence sustainable investments in occupational pension schemes? Considering this topic from an internationally comparative perspective is of crucial importance in view of the enormous increase in investments by capital-based pensions in EU countries. However, there is a lack of studies that investigate to what extent codetermination actors influence investment decisions and use them for non-financial purposes. Investing in sustainable activities – i.e. taking ecological, social and corporate governance aspects into account – can be a means of realizing the union goal of a sustainable and just society. It is also important to clarify which standards for social-ecological investments exist, who regulates them, and how.
The planned project therefore analyzes a) what influence actors of co-determination have on investments in occupational pension schemes, b) to what extent they take into account socio-ecological aspects and c) whether the preferences expressed at the beginning of the investment chain match the final investment. Answers to this are particularly relevant for Germany in view of the increasing importance of occupational pension schemes. The comparison with the Netherlands and Denmark – both countries with quasi-universal occupational pensions, strong trade union influence and substantial sustainable investments – provides important insights and recommendations for political actors in Germany.
We’re continuing our conversation on research ethics with Dr. Andrei Poama (Institute of Public Administration, Leiden University). Dr. Poama is an expert on the ethics of criminal justice. He is also a member of the Ethics Committee at the Faculty of Governance and Global Affairs. In our exchange with Dr. Poama, we discussed the ethical dilemmas confronting researchers in the social sciences, possible solutions to these dilemmas, and how the codes of conduct for Dutch researchers apply to graduate students. This is Part Two of two blog posts, in which we present the highlights from our conversation.
Please note: The guest talk has been modified to a question-and-answer format for easier reading. The spoken words have been edited for length and readability.*
A: The Code of Conduct for Research Integrity is the main document, which I think is quite clear and well done. It draws on the European equivalent. But the process is quite weird. So, when you open these codes, you see “oh, there are five leading principles for conducting ethical research.” And these principles are honesty, scrupulousness, transparency, independence and responsibility. And it’s like, “oh!” You don’t really know where they are coming from and so on.
I’m not going to go through them, but I want you to know what – based on those principles – would count as research misconduct. One of them you already know, and you have known about it since you were an undergrad: that’s plagiarism. But the other two are fabrication and falsification. Fabrication is simply fraud: fraudulent science, making up data. Falsification is when you have the data but you keep tweaking it, until the data says what you want it to say. I think many – I wouldn’t say most – but many scientists, especially in the quantitative tradition, engage not necessarily in full falsification, but they keep chasing that p value: rearranging the data and massaging it, until they get statistical significance. There are very interesting discussions about dropping the statistical significance level. The reason why we have the p value today is that you want genuine findings to be published. Now, what has happened because of this standard [the p value] is that people keep chasing the standard and modifying the data, until the data fits the standard. But by doing so, they modify the data so much that they actually end up falsifying it, at least to some extent.
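The cost of “chasing that p value” can be made concrete with a small simulation (a hypothetical illustration, not part of the talk). One common form of p-hacking is optional stopping: a researcher keeps collecting data and re-runs the same test after every batch, stopping as soon as p < .05. On pure noise, a single pre-planned test yields a false positive about 5% of the time, but repeated peeking inflates that rate far above the nominal level. The sketch below uses a simple two-sided z-test with known variance; all function names are made up for the example.

```python
import math
import random

def z_pvalue(sample_mean, n):
    """Two-sided p-value for H0: mean = 0, with known sd = 1 (z-test)."""
    z = abs(sample_mean) * math.sqrt(n)
    # P(|Z| > z) for a standard normal variable Z
    return math.erfc(z / math.sqrt(2))

def run_study(rng, peek, batch=10, max_n=100):
    """Collect pure-noise data; return True if p < .05 at any look."""
    data = []
    while len(data) < max_n:
        data.extend(rng.gauss(0, 1) for _ in range(batch))
        # honest researcher tests once at the end; p-hacker tests every batch
        if peek or len(data) == max_n:
            if z_pvalue(sum(data) / len(data), len(data)) < 0.05:
                return True
    return False

def false_positive_rate(peek, trials=2000, seed=42):
    rng = random.Random(seed)
    return sum(run_study(rng, peek) for _ in range(trials)) / trials

single = false_positive_rate(peek=False)   # one pre-planned test
peeking = false_positive_rate(peek=True)   # test after every batch of 10
print(f"single look: {single:.3f}, peeking: {peeking:.3f}")
```

Since there is no true effect in the simulated data, every “significant” result is a false positive; the peeking strategy produces them several times more often than the single pre-registered test, which is exactly the problem that pre-registration (discussed below in the original conversation) is meant to address.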
Q: So what kind of solutions to these problems are there?
A: There are two things that are very interesting happening today and that are addressing the falsification problem. One of these things is pre-registration. Pre-registration means that you have these online platforms, typically hosted by universities. So, for instance, my colleague Dr. Honorata Mazepus and I have been doing a survey experiment on how the socio-economic status of criminal offenders (whether they are poor or not) affects people’s judgments about whether to blame and punish the offenders. Before you even run the experiment, you go [to the platform] and you submit a document with your hypothesis and with your theory. Then you’re committed to those hypotheses, before you run the experiment. And it’s a requirement, when you submit the findings of the experiment, that you also submit the link with your pre-registered hypotheses […]. And the other thing is the discussion about dropping the p value.
There is a novelty or positive-findings bias in the way science – social science in particular, but also medical science – is practiced today. Your work is only going to get published if you find something: if you manage to some degree to confirm the hypothesis you’re after. If you find no relationship, so the null hypothesis holds, then no one is interested. One thing happening right now is that you have a few null-hypothesis or negative-findings journals […].
So that’s also interesting, and it’s being debated in the replication crisis that you might have heard of, especially in social psychology. You also find it in management studies. Basically, only about 30–40% of social psychology studies can be successfully replicated; the remaining 60–70% cannot.
Q: I think most of our students would probably either do a non-experimental survey or qualitative interviews rather than an experiment. So how would falsification play into [those methods]?
A: You can falsify anything, really. You could also falsify the findings of a survey. There are different ways of falsification. You can say, for instance, my sample is the students in the Masters or the undergraduate [program], but then you also make your Qualtrics survey link available to family and friends. So you might have people taking part who are not supposed to be part of the sample or population, but you just don’t say it. And there are ways of checking that, but I don’t think [anyone] is realistically going to do that.
Q: What I find a little bit tricky with interviews is when you want to use quotes from the interview and you have to polish the language a little bit, because you’re not going to type in all the ‘uhs’ and the ‘ahs’. So you would have to tweak it a little bit. I’m wondering where the fine line is between this acceptable editing on the one hand and falsification on the other.
A: I mean, it’s hard to say. I think if you change the meaning, then you’re clearly in the wrong. The way it happens with research misconduct, for example, is that there are always clear cases of wrongdoing. So, for instance, this case would be a clear case of falsification. And there are of course very clear cases where you would just simply report the data and the data is of high quality. There is no interview or open-ended question, so then you don’t have to engage too much with the interpretation. There are other cases in between. My metric there is that if you see yourself interpreting the data in a way that tends to confirm your hypothesis – if you’re too friendly to your hypothesis, to put it that way – then that’s a red light. You should try to be as uncharitable to your hypothesis as possible. Your job is to try to falsify the hypothesis.
Q: We have a question from a student who is doing research on co-production. This involves sitting in on meetings with citizens. So how do you prevent falsifying or misinterpreting the observations you make?
A: You can have a reflection on your interpretation, so you get to this meta-level where you basically say “well, this is what I’ve been doing, these are the weaknesses of my interpretation, and these are the things that I’m not sure about.” Another thing you should be doing is writing up your notes right after your observation moment. If you postpone it and do it two days later, all sorts of memory deception effects will be kicking in. That will be a problem. So just do it right afterwards.
Q: A lot of the examples of ethical breaches are really big breaches. I think for a lot of us, who are trying to do our research as ethically as possible, these big examples are not always that useful. Because, you know, we are not going to fake respondents. But sometimes those smaller dilemmas are actually the most difficult ones. You’re on the Ethics Committee in our faculty, so I was wondering what are some of the most common issues that you observe and that we could learn from?
A: Well, they often have this kind of structure (see slide below). So this is a made-up example, which is partly based on my supervision experiences. You know, students would do stuff similar to this. Imagine that one of your colleagues wants to test five hypotheses about the impact of socio-economic inequalities on educational opportunities in the Netherlands. To test these hypotheses, he plans to interview three teachers from a low-income neighborhood in The Hague. He comes to you to ask for some research advice about how to proceed with the study. The question is, what do you advise him? Is there an ethical problem with his research?
Q: In the discussion, our students quickly noted some issues with this research design. The researcher selects only a low-income neighborhood as his case study, instead of selecting multiple neighborhoods that vary in terms of average income levels. This constitutes selection bias. The researcher also aims to test five hypotheses based on only three interviews. This is known as the ‘degrees of freedom’ problem. But these seem to be issues of research design, not of research ethics per se. So why should we question this study in terms of research ethics?
A: Many of the cases that individual researchers submit to us [the FGGA Ethics Committee] are like this, because of our The Hague mission […]. Put yourself in the shoes of either the teacher or the students [in the low-income neighborhood] who have given these interviews. Then, you know, the researcher is sending you the article and says “oh look, here are the findings.” So what does that do to you as a teacher or as a student?
Many of the problems that we receive on the Ethics Committee have this kind of structure. You know, we want to draw very general conclusions based on a very small sample, because we don’t have a lot of data. But I think that one thing that you can do as individual researchers is to be very critical about the scope and range of your conclusions. Be very critical about the fact that this is not going to apply to the whole population. If you, as a part of this population, are reading about this research, especially as a teacher, I would imagine that basically in encounters with other teachers you will feel lower, less important, kind of responsible for this happening. And so I think one of the frequent problems that we do have is this stigmatization effect or potential for stigmatization.
Q: So what can you do?
A: I think there are two things that we could also do as individuals. One is to take charge of the science communication process […]. It’s often not the scientists or the researchers themselves who are doing the communication about the findings (unless you’re talking about Twitter or Facebook), it’s someone else. And I think one thing that we could do is to take hold of the communication process. So to actually present the data ourselves, because we have more nuance in the way in which we present it. Of course, to do that, we would need more time, and time is very scarce in academia today.
And the second thing is… I will just give this example. I did my second Masters in criminology, and I had this amazing teacher who was doing participant observation on offenders convicted on domestic violence charges. The way she was doing it was through interviews and just sitting there and observing things. And then she wrote her article and used, you know, fancy academic language. But then, before actually sending the article for publication, she did one very interesting thing: she took the manuscript and sent it back to the prisoners. So when the article got published, the title was “being a nosy bloody cow,” because that was the reaction of one of the inmates.
So one thing that you can do, especially if you see there is a potential for stigmatization, is to promise to give voice to your participants. And if you’re real about giving voice, you can do that in the actual content of your research products.
Q: To sum up?
A: So two things. One, you are not a student, you are the actual author of the research that you’re going to produce. And the participants that you’re going to work with are in some sense co-authors of that work. So don’t be shy about what you did. And two, if you already have a draft, you can send it back to the participants, to the people who have generated knowledge for you. Do it, especially if you are doing qualitative research, because that is a way of giving people some control over what you’re going to say about them.
There is nothing fixed about those five principles. Principles don’t apply in an obvious way across cases. Even between the principle and the case, there is this thing called judgment. You have to exert your judgment about a) whether the principle applies at all and b) how and to what extent the principle is going to apply. So one obvious principle that would apply [in the earlier example] is scrupulousness, that you show care in the way that you produce knowledge, gather knowledge and disseminate it.
Q: Thank you, Andrei, for sharing your thoughts on research ethics with us. And thank you to our students for their insightful questions!
*With thanks to Brecht, Edo, Meike-Yang and Nev for their insightful comments and questions.
When my co-teacher Janna and I set out to redesign our normally face-to-face course to accommodate the pivot to online learning this past semester, we were not sure what to do. The Covid-19 lockdown seemed to call for an altogether new approach to online teaching. In three blog posts, we’ll describe how we revised our course design, the practicalities of lockdown teaching, and why our students called our course “the gold standard of online teaching” by the end of the semester.
Part 2: The practicalities of lockdown teaching
In Part 1 of this short series, I outlined our approach to course design, which combined synchronous and asynchronous forms of learning. Our aim in the course was to create an inclusive learning environment for those students able to attend our weekly online seminars as well as those students who followed the course asynchronously. In this post, I will address how we put our initial ideas into practice. In short, we found out that three things proved to be particularly important when teaching online during a lockdown:
Take the small talk seriously: making space in our course for chitchat and non-teaching-related banter helped create an online community between us and our students. It made students more at ease when participating in the online chats and breakout sessions. They also indicated feeling more comfortable signaling to us when they were struggling with the course.
Make connections between synchronous and asynchronous learners: having to take a course remotely is difficult enough, let alone doing so mostly on your own. We wanted to make sure that asynchronous learners did not feel excluded from what was going on in the online seminars. To overcome this obstacle, we made use of the interactive features on the course management page (discussions, blog posts, wikis) and created joint exercises for synchronous and asynchronous learners.
Make sure to check in: in our department, few students make use of office hours. We therefore feared that remote learners might not contact us when struggling with the course. Our solution was to make attending our office hours part of the participation grade. This way, we gave a strong signal that attending office hours was expected of students. It helped us give extra attention to students who needed it.
Running the live seminars
Each week, we would meet our students for three hours during an online seminar. The seminars took place in a Kaltura Live Room, the online teaching platform acquired by our university. The Live Room made it possible for us to show slides, use a whiteboard, share our screen, have students work in break-out groups, and several other things that helped approximate a face-to-face classroom setting. Managing multiple functionalities at once proved difficult. Since we were co-teaching, one of us would lecture or lead discussion with the students, while the other person would monitor the chat or activate tools when needed.
We made sure to start each seminar with some small talk, with topics ranging from Netflix recommendations to the small joys of freshly baked pastries and park picnics during lockdown. Small talk proved to be important for our seminars for several reasons: it introduced a semblance of normal social interactions in our course; it opened the discussions in the chat, making students more comfortable to contribute; and it allowed us to do a quick check before each seminar to see how everyone was doing.
It is important to note that we did not shy away from sharing our own experiences with the students. When one of us had a bad day, about halfway into the course and into the lockdown, and said as much during the small talk, several students expressed feeling more comfortable admitting that they were struggling as well. In hindsight, this became one of the most appreciated features of our course (see also below on course evaluations).
Our seminars then followed a standard structure. Having three hours at our disposal, we would dedicate the first hour to a short lecture. One of us would talk, supported by slides and other visual aids. The other would monitor the chat. We made sure to make the lecture interactive by including brief surveys, posing questions for students to answer in the chat, or sharing links to additional online resources. The lecture would end with a short assignment, related to the week’s lecture topic. During the second hour, students worked together in break-out groups to do the assignment. While the assignment would rarely take a full hour to complete, we wanted students to have enough time to take breaks and to chat amongst themselves. For this reason, we did not enter the break-out groups, unless invited by the students (for instance, when they had a question). The third hour was then dedicated to presentations: the various groups would report back on their completed assignments and some students would present their blogs. We would end each seminar with a general discussion, to which students could contribute via webcam or chat.
For the asynchronous learners, we recorded the lecture component of each seminar. Break-out groups and class discussions were not recorded, because we feared that students present in the online classroom would be more reluctant to participate actively if their comments and remarks were ‘on the record’. After each seminar, we would post the lecture video on our learning management system (Blackboard).
We also added several features to our learning management system to help asynchronous learners understand the learning materials and keep engaged with the course. First, we created several discussion threads where students could pose questions. One thread was dedicated solely to organizational matters related to the course; others were structured around each course week and invited questions of a substantive nature. Second, we created a glossary of difficult terms and concepts from the course readings, for which we used the wiki function in our learning management system. Students were asked to post any terms they were struggling with or to post definitions of listed concepts that they already knew. Finally, we posted students’ blogs on the course page and asked students to use the comments function to ask questions or provide feedback on the blogs.
While we designed our course page on the learning management system predominantly with the asynchronous learners in mind, we were pleasantly surprised to see it helped forge connections between synchronous and asynchronous learners in our course: students answered each other’s questions in the online forums and engaged in lengthy discussions around the blogs, sometimes over several weeks. To a large degree, these interactions were unforeseen. While we had aimed to incentivize students to interact with each other by giving them a participation grade (weighted at 20% of the final grade), our students had initially misunderstood our instructions to mean they were assessed either on synchronous learning activities or on asynchronous learning activities. When synchronous learners used the interactive features on the learning management system, they told us they did so for their own enjoyment of communicating with other students.
One of the mechanisms at our disposal was the set of exercises that we gave students in the online seminars to work on in breakout groups. We would distribute the same exercises to the asynchronous learners, who would e-mail us their completed work. The exercises always involved a small research task that helped connect the themes from the course readings to current events. To give an example: in our week on corporate social responsibility, students explored public corporations’ charitable giving and other responses to the coronavirus pandemic and compared these against the measures taken to benefit the corporations’ shareholders. During the live session, each breakout group had done research on some of the world’s largest firms. We collected the results in a shared Google Drive file, to which asynchronous learners would add the findings from their own self-study efforts. The result was a collectively assembled dataset. Curious about other exercises? Click here.
Finally, we wanted to create a welcoming environment for students to interact with us, the course instructors. Again, we predominantly had asynchronous learners in mind. Since we would not meet our students in person for the duration of the course, we were afraid that we would not be able to find out when students struggled with their coursework during these strange times. We therefore included attending online office hours in our participation rubric, hoping to incentivize students to reach out to us. This worked out as expected: over the course of seven weeks, we spoke with almost all asynchronous learners in a one-on-one setting. While most conversations initially covered assignments or other substantive questions related to the course, they also provided an opening to talk about the – sometimes very serious – situations in which our students found themselves during the lockdown. In some cases, we were able to direct students to support services provided by our university; in other cases, we simply offered a listening ear. All in all, our office hours resulted in very meaningful conversations with our students that we may not have had under normal circumstances.
Up next: how students experienced our online course