I am happy to have been nominated as a candidate for the SASE Executive Council; the election is currently underway.
A SASE member of more than 10 years, I am excited about the opportunity to join the Executive Council. My own interdisciplinary research interests make me particularly well-suited to represent colleagues working on new approaches to economic sociology and political economy, including junior and emerging scholars. If elected, I look forward to supporting new collaborations between research networks. As our profession is rethinking its strong reliance on face-to-face meetings for scholarly exchanges, I’m also interested in developing new and inclusive ways in which our members can engage with or participate in SASE, including online. Finally, I would like to support ongoing efforts to green the organization and to explore new ways for SASE to thrive in the coming years.
If you are a SASE member, you can vote for the Executive Council here.
We’re continuing our conversation on research ethics with Dr. Andrei Poama (Institute of Public Administration, Leiden University). Dr. Poama is an expert on the ethics of criminal justice. He is also a member of the Ethics Committee at the Faculty of Governance and Global Affairs. In our exchange with Dr. Poama, we discussed the ethical dilemmas confronting researchers in the social sciences, possible solutions to these dilemmas, and how the codes of conduct for Dutch researchers apply to graduate students. This is Part Two of two blog posts, in which we present the highlights from our conversation.
Please note: the guest talk has been edited into a question-and-answer format for easier reading. The spoken words have been edited for length and readability.*
A: The Code of Conduct for Research Integrity is the main document, which I think is quite clear and well done. It draws on the European equivalent. But the process is quite weird. So, when you open these codes, you see “oh, there are five leading principles for conducting ethical research.” And these principles are honesty, scrupulousness, transparency, independence and responsibility. And it’s like “oh!” You don’t really know where they are coming from and so on.
I’m not going to go through them, but I want you to know what – based on those principles – would count as research misconduct. One of them you already know and have known about since you were an undergrad: plagiarism. But then the other two are fabrication and falsification. Fabrication is simply fraud – fraudulent science, making up data. Falsification is when you have the data but you keep tweaking it until the data says what you wanted it to say. I think many – I wouldn’t say most – but many scientists, especially in the quantitative tradition, engage not necessarily in full falsification, but they keep chasing that p value: rearranging the data and massaging it until they get statistical significance. There are very interesting discussions about dropping the statistical significance level. The reason why we have the p value today is that you want genuine findings to be published. Now, what has happened because you have this standard [the p value] is that people keep chasing the standard and modifying the data until the data fits the standard. But by doing so, they modify the data so much that they actually end up falsifying it, at least to some extent.
Q: So what kind of solutions to these problems are there?
A: There are two very interesting things happening today that address the falsification problem. One of them is pre-registration. Pre-registration means that there are these online platforms, typically hosted by universities. So, for instance, my colleague Dr. Honorata Mazepus and I have been doing a survey experiment on how the socio-economic status of criminal offenders (whether they are poor or not) affects people’s judgments about whether to blame and punish the offenders. Before you even run the experiment, you go [to the platform] and you submit a document with your hypotheses and your theory. Then you’re committed to those hypotheses before you run the experiment. And it’s a requirement, when you submit the findings of the experiment, that you also submit the link to your pre-registered hypotheses […]. And the other thing is the discussion about dropping the p value.
There is a novelty or positive-findings bias in the way science – social science in particular, but also medical science – is practiced today. Your work is only going to get published if you find something, if you manage to some degree to confirm the hypothesis you’re after. If you find no relationship, so the null hypothesis holds, then no one is interested. One thing happening right now is that there are a few null-results or negative-findings journals […].
So that’s also interesting, and it’s being debated in the replication crisis that you might have heard of, especially in social psychology. You also find it in management studies. Basically, only about 30-40% of social psychology studies replicate; the remaining 60-70% do not.
Q: I think most of our students would probably either do a non-experimental survey or qualitative interviews rather than an experiment. So how would falsification play into [those methods]?
A: You can falsify anything, really. You could also falsify the findings of a survey. There are different ways of falsifying. You can say, for instance, that your sample is the students in the Masters or the undergraduate [program], but then you also make your Qualtrics survey link available to family and friends. So you might have people in your data who are not part of the sample or population, but you just don’t say it. And there are ways of checking that, but I don’t think [anyone] is realistically going to do that.
Q: What I find a little bit tricky with interviews is when you want to use quotes from the interview and have to polish the language a little bit, because you’re not going to type in all the ‘uhs’ and the ‘ahs’. So you would have to tweak it a little bit. I’m wondering where the fine line is between acceptable editing on the one hand and falsification on the other.
A: I mean, it’s hard to say. I think if you change the meaning, then you’re clearly in the wrong. The way it happens with research misconduct, for example, is that there are always clear cases of wrongdoing. So, for instance, this case would be a clear case of falsification. And there are of course very clear cases where you would just simply report the data and the data is of high quality. There is no interview or no open-ended question, so then you don’t have to engage too much with interpretation. There are other cases in between. My metric there is that if you see yourself interpreting the data in a way that tends to confirm your hypothesis – if you’re too friendly to your hypothesis, to put it that way – then that’s a red light. You should try to be as uncharitable to your hypothesis as possible. Your job is to try to falsify the hypothesis.
Q: We have a question from a student who is doing research on co-production. This involves sitting in on meetings with citizens. So how do you prevent falsifying or misinterpreting the observations you make?
A: You can reflect on your interpretation, so you get to this meta-level where you basically say “well, this is what I’ve been doing, these are the weaknesses of my interpretation and these are the things that I’m not sure about.” Another thing you should be doing is to write up your notes right after your observation moment. If you postpone it and do it two days later, all sorts of memory-distortion effects will be kicking in. That will be a problem. So just do it immediately afterwards.
Q: A lot of the examples of ethical breaches are really big breaches. I think for a lot of us, who are trying to do our research as ethically as possible, these big examples are not always that useful. Because, you know, we are not going to fake respondents. But sometimes those smaller dilemmas are actually the most difficult ones. You’re on the Ethics Committee in our faculty, so I was wondering what are some of the most common issues that you observe and that we could learn from?
A: Well, they often have this kind of structure (see slide below). So this is a made-up example, which is partly based on my supervision experiences. You know, students would do stuff similar to this. Imagine that one of your colleagues wants to test five hypotheses about the impact of socio-economic inequalities on educational opportunities in the Netherlands. To test these hypotheses, he plans to interview three teachers from a low-income neighborhood in The Hague. He comes to you to ask for some research advice about how to proceed with the study. The question is, what do you advise him? Is there an ethical problem with his research?
Q: In the discussion, our students quickly noted some issues with this research design. The researcher only selects a low-income neighborhood as his case study, instead of selecting multiple neighborhoods that vary in terms of average income levels. This constitutes selection bias. The researcher also aims to test five hypotheses based on only three interviews. This is known as the ‘degrees of freedom’ problem. But these seem to be issues of research design, not of research ethics per se. So why should we question this study in terms of research ethics?
A: Many of the cases that individual researchers submit to us [the FGGA Ethics Committee] are like this, because of our The Hague mission […]. Put yourself in the shoes of either the teachers or the students [in the low-income neighborhood] who have given these interviews. Then, you know, the researcher sends you the article and says “oh look, here are the findings.” So what does that do to you as a teacher or as a student?
Many of the problems that we receive on the Ethics Committee have this kind of structure. You know, we want to draw very general conclusions based on a very small sample, because we don’t have a lot of data. But I think that one thing you can do as an individual researcher is to be very critical about the scope and range of your conclusions. Be very critical about the fact that this is not going to apply to the whole population. If you, as part of this population, are reading about this research, especially as a teacher, I would imagine that in encounters with other teachers you will basically feel lesser, less important, kind of responsible for this happening. And so I think one of the frequent problems that we do have is this stigmatization effect or potential for stigmatization.
Q: So what can you do?
A: I think there are two things that we could also do as individuals. One is to take charge of the science communication process […]. It’s often not the scientists or researchers themselves who communicate the findings (unless you’re talking about Twitter or Facebook); it’s someone else. And I think one thing that we could do is to take hold of the communication process – to actually present the data ourselves, because we have more nuance in the way in which we present it. Of course, to do that, we would need more time, and time is very scarce in academia today.
And the second thing is… I will just give this example. I have my second Masters in criminology, and I had this amazing teacher who was doing participant observation among offenders convicted on domestic violence charges. The way she did it was through interviews and just sitting there and observing things. And then she wrote her article and used, you know, nice academic language. But before actually sending the article for publication, she did one very interesting thing: she took the manuscript and sent it back to the prisoners. So when the article got published, the title was “being a nosy bloody cow,” because that was the reaction of one of the inmates.
So one thing that you can do, especially if you see that there is a potential for stigmatization, is to promise to give voice to your participants. And if you’re serious about giving voice, you can do that in the actual content of your research products.
Q: To sum up?
A: So two things. One, you are not a student, you are the actual author of the research that you’re going to produce. And the participants that you’re going to work with are in some sense co-authors of that work. So don’t be shy about what you did. And two, if you already have a draft, you can send it back to the participants, to the people who have generated knowledge for you. Do it, especially if you are doing qualitative research, because that is a way of giving people some control over what you’re going to say about them.
There is nothing fixed about those five principles. Principles don’t apply in an obvious way across cases. Even between the principle and the case, there is this thing called judgment. You have to exercise your judgment about a) whether the principle applies at all and b) how and to what extent it applies. So one obvious principle that would apply [in the earlier example] is scrupulousness: that you show care in the way you produce, gather and disseminate knowledge.
Q: Thank you, Andrei, for sharing your thoughts on research ethics with us. And thank you to our students for their insightful questions!
*With thanks to Brecht, Edo, Meike-Yang and Nev for their insightful comments and questions.
When my co-teacher Janna and I set out to redesign our normally face-to-face course to accommodate the pivot to online learning this past semester, we were not sure what to do. The Covid-19 lockdown seemed to call for an altogether new approach to online teaching. In three blog posts, we’ll describe how we revised our course design, the practicalities of lockdown teaching, and why our students called our course “the gold standard of online teaching” by the end of the semester.
Part 2: The practicalities of lockdown teaching
In Part 1 of this short series, I outlined our approach to course design, which combined synchronous and asynchronous forms of learning. Our aim was to create an inclusive learning environment for students able to attend our weekly online seminars as well as for students who followed the course asynchronously. In this post, I will address how we put our initial ideas into practice. In short, we found that three things were particularly important when teaching online during a lockdown:
Take the small talk seriously: making space in our course for chitchat and non-teaching-related banter helped create an online community between us and our students. It put students more at ease when participating in the online chats and breakout sessions. They also indicated feeling more comfortable signaling to us when they were struggling with the course.
Make connections between synchronous and asynchronous learners: having to take a course remotely is difficult enough, let alone doing so mostly on your own. We wanted to make sure that asynchronous learners did not feel excluded from what was going on in the online seminars. To overcome this obstacle, we made use of the interactive features on the course management page (discussions, blog posts, Wikis) and created joint exercises for synchronous and asynchronous learners.
Make sure to check in: in our department, few students make use of office hours. We therefore feared that remote learners might not contact us when struggling with the course. Our solution was to make attending our office hours part of the participation grade. This way, we gave a strong signal that attending office hours was expected of students. It also helped us give extra attention to students who needed it.
Running the live seminars
Each week, we would meet our students for three hours during an online seminar. The seminars took place in a Kaltura Live Room, the online teaching platform acquired by our university. The Live Room made it possible for us to show slides, use a whiteboard, share our screen, have students work in break-out groups, and several other things that helped approximate a face-to-face classroom setting. Managing multiple functionalities at once proved difficult. Since we were co-teaching, one of us would lecture or lead discussion with the students, while the other person would monitor the chat or activate tools when needed.
We made sure to start each seminar with some small talk, with topics ranging from Netflix recommendations to the small joys of freshly baked pastries and park picnics during lockdown. Small talk proved to be important for our seminars for several reasons: it introduced a semblance of normal social interactions in our course; it opened the discussions in the chat, making students more comfortable to contribute; and it allowed us to do a quick check before each seminar to see how everyone was doing.
It is important to note that we did not shy away from sharing our own experiences with the students. After one of us had a bad day, about half-way into the course and into the lockdown, and expressed as much during the small talk, several students expressed feeling more comfortable admitting that they were struggling as well. In hindsight, this became one of the most appreciated features of our course (see also below on course evaluations).
Our seminars then followed a standard structure. Having three hours at our disposal, we would dedicate the first hour to a short lecture. One of us would talk, supported by slides and other visual aids; the other would monitor the chat. We made sure to make the lecture interactive by including brief surveys, posing questions for students to answer in the chat, or sharing links to additional online resources. The lecture would end with a short assignment related to the week’s lecture topic. During the second hour, students worked together in breakout groups on the assignment. While the assignment would rarely take a full hour to complete, we wanted students to have enough time to take breaks and to chat amongst themselves. For this reason, we did not enter the breakout groups unless invited by the students (for instance, when they had a question). The third hour was then dedicated to presentations: the various groups would report back on their completed assignments and some students would present their blogs. We would end each seminar with a general discussion, to which students could contribute via webcam or chat.
For the asynchronous learners, we recorded the lecture component of each seminar. Breakout groups and class discussions were not recorded: we feared that students present in the online classroom would be more reluctant to participate actively if their comments and remarks were ‘on the record’. After each seminar, we would post the lecture video on our learning management system (Blackboard).
We also added several features to our learning management system to help asynchronous learners understand the learning materials and stay engaged with the course. First, we created several discussion threads where students could pose questions. One thread was dedicated solely to organizational matters related to the course; others were structured around each course week and invited questions of a substantive nature. Second, we created a glossary of difficult terms and concepts from the course readings, for which we used the Wiki function in our learning management system. Students were asked to post any terms they were struggling with or to post definitions of listed concepts that they already knew. Finally, we posted students’ blogs on the course page and asked students to use the comments function to ask questions or provide feedback on the blogs.
While we designed our course page on the learning management system predominantly with the asynchronous learners in mind, we were pleasantly surprised to see that it helped forge connections between synchronous and asynchronous learners: students answered each other’s questions in the online forums and engaged in lengthy discussions around the blogs, sometimes over several weeks. To a large degree, these interactions were unforeseen. While we had aimed to incentivize students to interact with each other by giving them a participation grade (weighted at 20% of the final grade), our students had initially misunderstood our instructions to mean that they would be assessed either on synchronous or on asynchronous learning activities. When synchronous learners used the interactive features on the learning management system, they told us they did so simply for the enjoyment of communicating with other students.
One of the mechanisms at our disposal was the set of exercises that we gave students in the online seminars to work on in breakout groups. We would distribute the same exercises to the asynchronous learners, who would e-mail us their completed work. The exercises always involved a small research task that helped connect the themes from the course readings to current events. To give an example: in our week on corporate social responsibility, students explored public corporations’ charitable giving and other responses to the coronavirus pandemic and compared these against the measures taken to benefit the corporations’ shareholders. During the live session, each breakout group had done research on some of the world’s largest firms. We collected the results in a shared Google Drive file, to which asynchronous learners would add the findings from their own self-study efforts. The result was a collectively assembled dataset. Curious about other exercises? Click here.
Finally, we wanted to create a welcoming environment for students to interact with us, the course instructors. Again, we predominantly had asynchronous learners in mind. Since we would not meet our students in person for the duration of the course, we were afraid that we would not be able to find out when students struggled with their coursework during these strange times. We therefore included attending online office hours in our participation rubric, hoping to incentivize students to reach out to us. This worked out as expected: over the course of seven weeks, we spoke with almost all asynchronous learners in a one-on-one setting. While most conversations initially covered assignments or other substantive questions related to the course, they also provided an opening to talk about the – sometimes very serious – situations in which our students found themselves during the lockdown. In some cases, we were able to direct students to support services provided by our university; in other cases, we simply offered a listening ear. All in all, our office hours resulted in very meaningful conversations with our students that we might not have had under normal circumstances.
Up next: how students experienced our online course
When my co-teacher and I set out to redesign our normally face-to-face course to accommodate the pivot to online learning this past semester, we were not sure what to do. The Covid-19 lockdown seemed to call for an altogether new approach to online teaching. In three blog posts, we’ll describe how we revised our course design, the practicalities of lockdown teaching, and why our students called our course “the gold standard of online teaching” by the end of the semester.
Part 1: Synchronous versus asynchronous learning – Why choose?
The graduate-level course Markets in the Welfare State is generally the highlight of my teaching year. It’s an elective on the topic of my research, meaning it’s that rare treat of a course in which I get to teach the topics I enjoy the most to students with a strong interest in the subject matter. This year, I was joined by Janna Goijaerts, PhD student and teacher-in-training at Leiden University.
The course started only a few weeks after our university had made the pivot to online teaching due to the coronavirus pandemic. This meant that Janna and I had to radically revise our standard course design. Like so many university teachers, we struggled with the choice between synchronous and asynchronous teaching. While educators on social media seemed to strongly prefer either one or the other, we were not so sure.
Our compromise was a course design that allowed students to do both: attend weekly online seminars or follow the course in their own time via the learning management system. We made sure to incorporate various interactive features. It worked. Both student performance and evaluation scores were up from the regular edition of the course. One student even deemed our course “the gold standard of online teaching.”
Hyperbole aside, we believe that we found a way to make online teaching enjoyable for both students and teachers who are largely used to face-to-face teaching, while not sacrificing performance. In the following three posts, we will therefore outline the main elements of our course design, describe how we ran our course, and report back on how students experienced our course.
We hope that our experience may be useful to other teachers, who like us are at the start of another semester of online teaching.
Reconsidering our course design
The pivot to online teaching proved to be more consequential for our course than ‘simply’ transferring a face-to-face course to an online setting. While the course normally attracts around 15 students, this year more than 40 had signed up. This meant that Janna and I had to revise our envisioned course design once it became clear that social distancing would prevent any regular teaching. We had to rethink several elements of the course, such as our normal reliance on small group discussions and the substantial number of written assignments.
Like so many university teachers, we struggled with the choice between synchronous and asynchronous teaching. Synchronous learning refers to a situation whereby students learn at the same time; asynchronous learning occurs when students learn at different times. Both terms refer to online education in which students are not in the same location when they learn. While many educators on social media seemed to strongly prefer either one or the other, we were not so sure. We asked ourselves:
How do we create an inclusive learning environment that is attentive to the needs of students who may be prevented from participating synchronously in our course, while at the same time serving the needs of students who prefer the structure and sociality of synchronous learning?
Our students come from incredibly diverse backgrounds. While most students are of Dutch origin, around one-third are internationals from all over the world. The Dutch students typically stayed in their dorms or temporarily moved in with their parents during the lockdown. Many (but certainly not all) international students traveled home to be with their families. Some students found themselves isolated and by themselves during the lockdown; others combined their studies with jobs or family care and felt overwhelmed. This diversity meant that any choice for either synchronous or asynchronous teaching would necessarily exclude certain students from our course.
Two learning tracks
Having already taught courses online, I felt confident enough to answer our question with a “why not do both?” So we designed our course specifically with two forms of participation in mind. The first option was to participate synchronously in the course. This meant attending weekly live seminars on Wednesday mornings between 9:15am and 12pm. The live seminars were the online equivalents of our regular face-to-face meetings. The live seminars consisted of short lectures on the course readings, presentations of cases, short exercises, and group discussions. Attendance in the live seminars was not mandatory, but we did expect active participation during these sessions.
We did not use the full three hours for our live seminars. Instead, we divided the time available into three timeslots. In the first hour, we would present a lecture, supported by slides. During the second hour, students would meet in break-out groups to do a collaborative exercise. During this time, students were also free to schedule breaks as desired. In the third hour, we would meet back in the online classroom for a plenary discussion of the exercise.
The second option was to participate asynchronously in the course, by which students used the interactive features on the course management system in their own time. Instead of attending the live seminars, students could view the video recordings of the lecture component of the seminars. Students did individual versions of the seminar exercises in their own time. They could comment on blogs or contribute to the course Wiki. Students were also strongly encouraged to make an appointment for online office hours at least once during the course.
Students were also welcome to do a combination of both options (e.g. participate in the live seminar one week, participate asynchronously in another week).
(post continues under table)
Table 1: Types of course participation
Synchronous participation:
Attend live seminars
Ask or answer questions during lectures
Work with other students in breakout groups
Take a leadership role in breakout groups (moderator, scribe, reporter)
Contribute to group discussions
Present your blog during a seminar

Asynchronous participation:
View online lectures
Ask or answer questions in discussion forums
Do seminar exercises in your own time
Contribute to course Wiki
Comment on blog posts
Attend online office hours
Combining two forms of teaching and learning meant that we had to adjust our assessment strategy. The first thing we did was to reduce the number of paper assignments from two to one. Instead of writing a short paper at mid-term, we now asked students to write a blog about a recent socio-economic policy measure taken in response to the Covid-19 crisis in a country of their choice. Students were encouraged to write about the same crisis measure in their final paper at the end of the course. Blogs were posted on Blackboard for other students to read and comment on. Synchronous learners also had the option to present their blogs during a live seminar.
A second major change was to add a participation grade to the course. This might seem counterintuitive, considering our intention to accommodate students who were unable to participate extensively in the course. Yet, we felt it important to both incentivize and reward any form of participation that students were able to put into the course. This meant, for instance, that we considered attending the live seminars of equal weight to commenting on a blog post. We made sure that students could earn a passing grade, even with minimal participation (see our rubric here). This way, we hoped to incentivize engaged participation in the course, while simultaneously recognizing that not all students could participate to the same extent as in a face-to-face course.
A final consideration was how to keep tabs on students who would participate asynchronously in our course. We knew that students sometimes feel uncomfortable reaching out if they need help. With our asynchronous participants, we would not have the types of interactions that would help us assess whether or not they were struggling with the course. We therefore asked students to send us any exercises they did in their own time, which we would briefly check. We also strongly encouraged them to make appointments for online office hours. To further incentivize this, we included office hours appointments in our participation rubric. This ensured that, by the end of the course, we had spoken to each asynchronous learner at least once.
Up next: read how our lockdown-proof course design worked out in practice.
I am looking for a postdoctoral researcher to join the NORFACE-funded project “Democratic Governance of Funded Pension Schemes.” Applications can be submitted until June 23 via the Leiden University website.
Post-Doctoral researcher for the NORFACE project “Democratic Governance of Funded Pension Schemes”
Description of the vacancy
Leiden University’s Institute of Public Administration is looking for a post-doctoral researcher to join the research project “Democratic Governance of Funded Pension Schemes” (DEEPEN) for a period of three years at 1.0 FTE. This position is made possible by a grant from the New Opportunities for Research Funding Agency Cooperation in Europe (NORFACE) Network. The project explores the democratic governance of capital-funded occupational pension schemes and investigates how governments, regulators and labor market actors govern funded pensions and whether participants are satisfied with pension fund performance. The project focuses on Denmark, the Netherlands, Germany, Austria, Ireland and Spain. The project combines quantitative analysis of survey data with comparative case studies based on elite and expert interviews and analysis of primary and secondary documents.
The postdoc will be part of the research team at Leiden University, led by Dr. Natascha van der Zwan. Other research teams are based in Austria, Ireland and Spain. The postdoc will conduct case studies of selected occupational pension schemes in the Netherlands to investigate the decision-making processes that link welfare provisions to financial markets. The postdoc will also contribute to comparative research on the regulatory context of occupational pensions in the project countries.
Key responsibilities:
Conduct independent and collaborative research on the democratic governance of capital-funded occupational pension schemes;
Conduct elite and expert interviews in the Netherlands to gather information on the individual cases selected for analysis;
Collect and analyse information on the regulatory context of occupational pension provisions in the Netherlands;
Disseminate the project’s findings as co-author in high-ranking international peer-reviewed journals, conference presentations, policy reports and other relevant formats;
Collaborate in presenting the project to key stakeholders, academics and policymakers/practitioners working on occupational pensions.
Selection criteria:
Candidates hold a PhD in political science, sociology, economics, organization studies, geography or another relevant discipline;
Candidates have a strong command of qualitative research methods;
Candidates have a promising publication record that includes publications in international refereed journals;
Candidates have a proven ability to communicate research findings to non-academic audiences, e.g. through publications aimed at a broader audience, media work, etc;
In addition to proficiency in English, a good command of Dutch is considered an important asset, as project responsibilities include conducting elite and expert interviews in the Netherlands.
The Faculty of Governance and Global Affairs (FGGA) offers academic education in the field of Public Administration, Safety and Security, and International Relations, as well as in-depth post-academic programmes for professionals. In addition, the Faculty is also home to Leiden University College.
The Institute of Public Administration has an established international profile and has consistently received high ratings in peer reviews of both its teaching and research programs. The Institute offers a Dutch-language Bachelor program with two tracks, a Dutch-language Master program in Public Sector Management and an English-language Master program in Public Administration.
The position starts between 1 October 2020 and 1 December 2020. The appointment will initially be made on a one-year, full-time basis, with an extension to a total of three years after a positive evaluation. The appointment falls under the terms of the Collective Labour Agreement (CAO) of Dutch Universities. The starting salary, depending on qualifications and experience, ranges from € 2.709,- to € 4.274,- gross per month (pay scale 10).
Leiden University offers an attractive benefits package with additional holiday (8%) and end-of-year bonuses (8.3 %). Our individual choices model gives you some freedom to assemble your own set of terms and conditions. For international spouses we have set up a dual career programme. Candidates from outside the Netherlands may be eligible for a substantial tax break.
The corona crisis is posing unique challenges to teachers and students as traditional courses are redesigned for online teaching. Some students lack the time and resources to participate synchronously (e.g. attend live seminars), while others prefer the structure and sense of community that synchronous teaching brings. To make our course inclusive of both groups of students, my co-teacher Janna Goijaerts and I have chosen to combine synchronous and asynchronous forms of participation. So far, we have found that this combination helps students stay engaged and connected, even when at a physical distance from us and from each other.
On Wednesday, May 20, we will be hosting a webinar on how to improve student engagement through synchronous and asynchronous teaching tools. The webinar is organized by the Centre for Innovation and the ICLON at Leiden University.
I am thrilled to announce that the NORFACE network is funding our project “Democratic Governance of Funded Pension Schemes” (DEEPEN).
DEEPEN explores the democratic governance of capital-funded occupational pension schemes. We adopt Scharpf’s distinction between input legitimacy (are collectively binding decisions in line with citizens’ democratically expressed preferences?) and output legitimacy (do collectively binding decisions serve the common interests of the citizens?) to investigate how governments, regulators and labour market actors govern funded pensions (input legitimacy) and whether participants are satisfied with pension fund performance (output legitimacy). The project focuses on Denmark, the Netherlands, Germany, Austria, Ireland and Spain because the structure of funded pension provision varies along key dimensions relevant to input and output legitimacy.
The project combines quantitative analysis of survey data with comparative case studies based on elite and expert interviews and analysis of primary and secondary documents. Four work packages investigate the following research questions: How does national policy define participant influence on funded pension provision? How do stakeholders use pension fund governance to influence investment policy? How have capital-funded pension schemes performed in terms of pension outcomes across European welfare states? To what extent are individual attitudes on pension investment aligned with these inputs and outputs?
The project team includes Karen Anderson (PI) from University College Dublin, Juan Fernandez from University Carlos III in Madrid and Tobias Wiss from the Johannes Kepler University Linz. We’ll be hiring post-doctoral researchers (Dublin, Leiden) and PhD students (Linz, Madrid) to join our project team.
It’s been a week since many of us made the pivot to online teaching. Since then, I have been figuring out which form of online teaching (I outlined five options in my previous blog) would work best for my courses at the Institute of Public Administration at Leiden University. Because I teach small-scale seminar courses, I was particularly excited to learn that our university had acquired a new platform to teach interactive online classes (option 5 in my previous post). Having never used the platform, I figured this called for a practice session.
Preparing to mess up
Using the department’s app group, I asked which of my colleagues would be interested in joining a practice session with the new platform. Two days later, I found myself fumbling with slides, tools and chats in a newly assembled online classroom, while 30 colleagues from 3 different departments and a reporter for the university newspaper looked on. I had decided to focus my session on what would be most useful to the participants: a brief outline of how colleagues could use the new platform as teachers, while simultaneously having them experience it as students.
We kicked off with an icebreaker quiz consisting of a few silly questions that allowed me to set the tone for the session: serious overall, but slightly giddy at times. I then continued with a brief lecture, using slides I had prepared earlier. The lecture covered what kind of preparations are necessary to organize an online seminar, how to build the online classroom, and how to lead the seminar. I interrupted the lecture with short interactions to highlight some of the features of the platform: one colleague made a drawing on one of my slides, while another responded to a question by using the digital hand-raising tool. I also shared my desktop to show the contents of one of my browser tabs (it was a video of two swimming sea turtles, which I found relaxing to watch). Towards the end of the session, participants formed small breakout groups to think about how they could use this platform in their own classes. We ended with a group discussion, sharing the results from the breakout groups.
Most importantly, I prepared to mess up. Few of us will be able to run an online seminar smoothly under the present circumstances, and our teaching will involve a lot of trial and error. Why postpone the inevitable? When I set up the online classroom, I had noticed that a few tools (video!) did not work for me. So I deliberately included these tools in the trial run. And then, of course, lots of other things during the online session did not go exactly as planned. We were lucky to have our ICT & Education coordinator present to help out with problems as they occurred in real time. Here’s what we learned:
Lessons from the practice session
1) Minimize multi-tasking. I love all the tools for communication and interaction that our online system offers: I can see the participants through their webcams, but they can also communicate via chat and by digitally raising their hands. When I was leading the practice session, however, I noticed it was impossible to keep an eye on all those tools at the same time. While giving my presentation, participants raised their hands to pose questions but they were outside my focal point on the screen and I did not notice. The same applied to the chat function. So when I teach my first online seminar with students, I’ll avoid multi-tasking by giving them clear instructions on when to communicate and how.
2) Take the student point of view. As the person leading the session, I did not see the same things as the participants. This may seem commonsense, but it is surprisingly easy to forget in an online classroom. In a face-to-face setting, we read our students’ body language to intuit how they are responding to our teaching. In an online setting, we cannot use our senses in the same way. At the beginning of my presentation, for instance, some participants experienced delays in the connection and were unable to follow me. From my end, everything seemed fine and I continued speaking. Only later did I see their chat messages notifying me of the connectivity issues. It made me realize I need to communicate to students beforehand how they can solve common problems, rather than trying to fix things for them myself (see also: minimize multi-tasking).
3) Assign roles. Another issue concerned the roles we take on in the classroom. Even colleagues who are used to standing in front of a classroom confessed to me that they found it daunting to visibly participate in the online classroom. Participating online means that your face is projected onto everyone’s computer screen, which can make you uncomfortably self-conscious. In the break-out groups, participants found it difficult to self-organize without a teacher present. So they resorted to silliness. When I visited these rooms, I found pictures of monkeys, games of tic-tac-toe and really anything but a serious discussion. In a face-to-face setting, I would notice this. Here, I had to enter each break-out room separately to check in, and that took time. In future online seminars, I’ll therefore make sure to assign clear roles (e.g. moderator, note-taker, reporter) beforehand, so students know what to do. And perhaps accept that an occasional game of tic-tac-toe won’t hurt anybody…
4) Share knowledge. Immediately after our practice session, I created a Google Doc to share with my colleagues. In the Google Doc, we write down tips and tricks for using the online classroom. We cover things that are not part of the technical instructions, but rather focus on the use of the online classroom in real life. One of the participants observed, for instance, that the screen briefly turns black when the teacher activates a new tool. She had initially mistaken this for a connectivity issue and had logged off, but later realized it was a quirk of the platform. The solution was simply to wait it out. We also discovered we could avoid awkward silences if participants were in charge of turning their own microphones on or off. When I did this for them, the platform took much longer to respond. Observations like these led us to the final lesson:
5) Write a protocol. When people signed up for the practice session, most of them did so out of a lack of familiarity with the technical features of the platform. They simply wanted to know how it worked and if they possessed the skills to use it. After our practice session, however, we realized that running an online classroom has only partially to do with mastering technical skills. It’s also – and perhaps even more so – about clearly communicating how we can all contribute to making the online classroom a success. One of the best suggestions coming out of this trial run was to create a protocol for students, detailing such things as how to communicate during the different segments of an online seminar (e.g. raise a hand or post a chat message), how to solve common problems with the platform, and when to adopt which role in the online classroom.
In the past week, the online classroom has become a space for us as colleagues to come together and reflect on how we can collectively manage the pivot to online teaching. Looking back at our impromptu practice session, I feel more confident in being able to handle the uncertainties of the next few months, at least when it comes to my teaching. I hope you will as well.
On February 24, 2020, Dr. Andrei Poama (Institute of Public Administration, Leiden University) visited our course on Research Methods (Master of Public Administration) for a guest talk on research ethics. Dr. Poama is a well-known expert on the ethics of criminal justice. He is also a member of the Ethics Committee at the Faculty of Governance and Global Affairs. In our conversation with Dr. Poama, we discussed the ethical dilemmas confronting researchers in the social sciences, possible solutions to these dilemmas, and how the codes of conduct for Dutch researchers apply to graduate students. This is Part One of two blog posts, in which we present the highlights from our conversation.
Please note: The guest talk has been modified to a question-and-answer format for easier reading. The spoken words have been edited for length and readability.*
Q: Andrei, you started our conversation today by asking students to what extent they thought of themselves as doing scientific research, on a scale from 0 to 7. Most students here gave themselves a 4, stating that they didn’t feel as if they were doing real research. They mentioned various reasons for this, for instance because they studied a limited number of cases or because they experienced a lack of data availability. This outcome surprised me, because these issues struck me as being normal parts of the research process, regardless of who the researcher is. I myself tend to view my thesis supervisees as actual researchers and I expect them to behave as such.
A: For me, it depends on the motivation that the student has. In terms of expectations, I would say something like a 6 or a 7. I don’t really draw any kind of distinction between what [students] are supposed to be doing and what we are doing. The questions that [students] have – problems with your database, you don’t find [particular sources], you don’t have enough material or data – those are recurring questions for actual researchers as well.
I think it’s important [for students] to understand that you’re not doing this other thing. It’s not about doing your homework or writing an essay for class or something, you’re actually producing knowledge. You are supposed to be doing research, even if you’re not in the classroom. And that’s the way I would see it. But, of course, [as students] you don’t legally fall under the standards from the codes of research ethics. So, if something goes wrong – plagiarism aside – then you’re not going to be sanctioned.
Q: So what do you tell your own supervisees, when they don’t see themselves as researchers?
A: One thing that I’m doing for the first time in my thesis capstone this year is a small workshop, where we just meet and everyone reads each other’s two-pager with a research question, the hypothesis and, broadly, the literature. Then we’re supposed to briefly give feedback to each other. And the reason I introduced that [element of peer review] was that I really did feel there was this kind of student-teacher relationship in some of the supervision processes. It really depends on the student. So you have very proactive students, who just draw on the literature. They have an idea, they have a method and they just go for it. Then there are students who think they are doing their homework.
I guess what I’m going for is that when you’re going to do [research] and write your thesis, but also in other courses, you should be thinking about it as the real thing and not some kind of second-rate task.
Q: Andrei, you showed us a short video about a documentary called Three identical strangers (see trailer below). Can you tell us a little about what we just saw?
A: So yes, this happened in the 1960s. There were a series of so-called twin experiments. Sometimes they involved twins, sometimes they were triplets. This happened throughout the 1960s, when the ‘nature versus nurture’ hypothesis was at its peak. What happened was that an adoption agency – the Louise Wise adoption agency in New York – started a collaborative project with a couple of psychologists and psychiatrists at New York University (Peter Neubauer and Viola Bernard), who wanted to test the ‘nature versus nurture’ hypothesis.
So in this particular case, the mother had died giving birth to quadruplets. One of the children had also died at birth, so they were triplets, and they were put up for adoption. Now, without telling the [adoptive] parents, one child was given to a blue-collar family, one to a middle-class family and one to an affluent family. Then the adoption agency, in collaboration with the researchers, would call up the adoptive families for regular check-up meetings, to see whether the child was doing all right. But what they were actually doing was measuring each of the triplets on these dimensions, to be able to compare them.
Q: Was this a typical way of conducting these twin studies?
A: There were many other twin studies in the 1960s and the 1970s. But most of them were observational, in the sense that you had [children] who were up for adoption and then the [researchers] did a series of observations, interviews, personality tests and so on, to see if being raised in a particular family with a particular socio-economic background made for different personality traits. This is one of the few actual experiments. So what happened in the 80s and up until the 90s, as you see in the trailer, is that it just blew up, because [the triplets] realized what had happened and that obviously became a big scandal.
And, you know, in a sense it was a happy moment for them, to find each other. But they also struggled with depression, identity problems and so on. What happened in the end is that one of the three guys committed suicide. So the question I’m posing here is what do you think is the problem with this? After all, you know, these kids had families that cared for them.
Q: Purely from a gut feeling response to seeing this video, I’d say this constitutes a very straightforward breach of research ethics. But you seem to hint that it’s more complicated.
A: You know, from the researchers’ perspective, everything was almost spot on. It was single-blind. You know, the research design, research methods… perfect or almost perfect. So how is this worse than, for instance, experiments in social psychology, where you go to a lab? Deception happens all the time; you’re being debriefed at the end of it. There is a sense [in this case] that the experiment was not over when [the triplets] found out about it. You could imagine that they would have been debriefed after 20-30 years, because the experiment is a long one. So, we use deception all the time in experiments. This is different. So the question is: why is it different?
Q: One of our students said that this is different, because you are changing the lives that these people could have had. There is a status quo: there are triplets. You split them up and there can never be this status quo again.
A: Yes, it is troubling, even the idea of watching it. There is something voyeuristic about it. It’s not like physical suffering, as in the case of a medical experiment. There were no substances involved. But there is a sense in which stopping the experiment, disclosing the experiment to the participants, is what is doing the harm. Because imagine that they would have never found out. All the depression and so on actually kicked in when they found out what was happening. So what I find very troubling about this is that the bind of the blind experiment, if I may put it like this, is complete in a way. Once the experiment has started, there is a sense in which you can’t make it right again. All the options that you have at your disposal are wrong in some dimension. There is a sense in which minimizing the harm that it can do to participants in social science studies, especially when it comes to psychology, means that you try to keep the degree to which the participants take part in the experiment localized and limited. So don’t involve all of a person’s life in the project.
Q: What about studies that are not experiments? I think many of our students would instead do interviews or conduct surveys. How do research ethics apply to these kinds of studies?
A: There are many ways in which ethics is typically involved in our research activities. We have all these codes that we’re supposed to be reading and be aware of. And when we conduct research that actively involves human and non-human subjects, then we fill in these forms and we send them to the Ethics Committee. I’m also a member of the Ethics Committee of the Faculty of Governance and Global Affairs. And then the ethics committee looks at the project and the informed consent documents, using the codes of conduct as a standard. Then they say: well, you have a problem with your informed consent form. Or: everything is perfect and you’re good to go. These documents also apply to you as students. You are not on the payroll of the faculty, but you still count as individual researchers. So if you take anything out of [our conversation], take this idea: that you are actually doing research. You are not a student writing something for a course. You are a graduate student, who is also a researcher.
Stay tuned for Part 2…
*With thanks to Brecht, Edo, Meike-Yang and Nev for their insightful comments and questions.