As of January 2021, I will be joining the Editorial Board of Socio-Economic Review. I am very honored to be able to contribute to a journal that has been so important to my career!
Socio-Economic Review is one of the top journals in political economy and economic sociology. It’s currently ranked 11/180 for Political Science and 6/150 for Sociology, with an impact factor of 3.774 (2019).
This week, my article “Van aandeelhouderskapitalisme naar neushoornobligaties” (“From shareholder capitalism to rhino bonds”) appeared in the latest edition of De Helling. In this article, I discuss the phenomenon of financialization and how it manifests itself in our welfare state, on the high street, and in nature.
“The financial world now has every facet of our daily lives in its grip: from groceries to biodiversity. This so-called financialization produces inequality, a mountain of debt, and workers who are seen not as people but as capital. The financial system must therefore become fairer and more social on the one hand, while on the other we must reduce our dependence on finance.”
I am happy to have been nominated as a candidate for the SASE Executive Council. The election for the Executive Council is currently underway.
A SASE member of more than 10 years, I am excited about the opportunity to join the Executive Council. My own interdisciplinary research interests make me particularly well-suited to represent colleagues working on new approaches to economic sociology and political economy, including junior and emerging scholars. If elected, I look forward to supporting new collaborations between research networks. As our profession rethinks its strong reliance on face-to-face meetings for scholarly exchange, I am also interested in developing new and inclusive ways for our members to engage with and participate in SASE, including online. Finally, I would like to support ongoing efforts to green the organization and to explore new ways for SASE to thrive in the coming years.
If you are a SASE member, you can vote for the Executive Council here.
I am thrilled to announce that the Hans-Böckler-Stiftung has decided to fund our research project “Sustainability through company pension schemes? The influence of codetermination actors on investment strategies.” The funding will allow Karen Anderson, Tobias Wiss, me, and two postdocs to study this important topic for the next two years. Read more about the project below.
Sustainability through occupational pension schemes? The influence of codetermination actors on investment strategies
How can codetermination actors influence sustainable investments in occupational pension schemes? Considering this topic from an internationally comparative perspective is of crucial importance in view of the enormous increase in investments by capital-based pensions in EU countries. However, there is a lack of studies that investigate to what extent codetermination actors influence investment decisions and use them for non-financial purposes. Investing in sustainable activities – i.e. taking ecological, social, and corporate governance aspects into account – can be a means of realizing the union goal of a sustainable and just society. It is also important to clarify which standards for social-ecological investment exist, who regulates them, and how.
The planned project therefore analyzes a) what influence codetermination actors have on investments in occupational pension schemes, b) to what extent they take socio-ecological aspects into account, and c) whether the preferences expressed at the beginning of the investment chain match the final investment. Answers to these questions are particularly relevant for Germany in view of the increasing importance of occupational pension schemes. The comparison with the Netherlands and Denmark – both countries with quasi-universal occupational pensions, strong trade union influence, and substantial sustainable investments – provides important insights and recommendations for political actors in Germany.
We’re continuing our conversation on research ethics with Dr. Andrei Poama (Institute of Public Administration, Leiden University). Dr. Poama is an expert on the ethics of criminal justice. He is also a member of the Ethics Committee at the Faculty of Governance and Global Affairs. In our exchange with Dr. Poama, we discussed the ethical dilemmas confronting researchers in the social sciences, possible solutions to these dilemmas, and how the codes of conduct for Dutch researchers apply to graduate students. This is Part Two of two blog posts, in which we present the highlights from our conversation.
Please note: The guest talk has been converted into a question-and-answer format for easier reading. The spoken words have been edited for length and readability.*
A: The Code of Conduct for Research Integrity is the main document, which I think is quite clear and well done. It draws on the European equivalent. But the process is quite weird. So, when you open these codes, you see “oh, there are five leading principles for conducting ethical research.” And these principles are honesty, scrupulousness, transparency, independence and responsibility. And it’s like, “oh!” You don’t really know where they come from and so on.
I’m not going to go through them, but I want you to know what – based on those principles – would count as research misconduct. One of them you already know and have known about since you were an undergrad: plagiarism. The other two are fabrication and falsification. Fabrication is simply fraud – fraudulent science, making up data. Falsification is when you have the data but you keep tweaking it until the data says what you want it to say. I think many – I wouldn’t say most – but many scientists, especially in the quantitative tradition, engage not necessarily in full falsification, but they keep chasing that p-value: rearranging the data and massaging it until they get statistical significance. There are very interesting discussions about dropping the statistical significance level. The reason we have the p-value today is that we want genuine findings to be published. But because we have this standard [the p-value], people keep chasing the standard and modifying the data until the data fits the standard. And by doing so, they modify the data so much that they actually end up falsifying it, at least to some extent.
Q: So what kind of solutions to these problems are there?
A: There are two very interesting things happening today that address the falsification problem. One of them is pre-registration. Pre-registration means that you have these online platforms, typically hosted by universities. So, for instance, my colleague Dr. Honorata Mazepus and I have been doing a survey experiment on how the socio-economic status of criminal offenders (whether they are poor or not) affects people’s judgments about whether to blame and punish the offenders. Before you even run the experiment, you go [to the platform] and submit a document with your hypotheses and your theory. Then you’re committed to those hypotheses before you run the experiment. And it’s a requirement, when you submit the findings of the experiment, that you also submit the link to your pre-registered hypotheses […]. The other thing is the discussion about dropping the p-value.
There is a novelty or positive-findings bias in the way science is practiced today, in particular in the social sciences but also in medicine. Your work is only going to get published if you find something, if you manage to some degree to confirm the hypothesis you’re after. If you find no relationship, so the null hypothesis holds, then no one is interested. One thing happening right now is that you have a few null-hypothesis or negative-findings journals […].
So that’s also interesting, and it’s being debated in the replication crisis that you might have heard of, especially in social psychology. You also find it in management studies. Basically, only about 30-40% of social psychology studies can be replicated; the remaining 60-70% cannot.
Q: I think most of our students would probably either do a non-experimental survey or qualitative interviews rather than an experiment. So how would falsification play into [those methods]?
A: You can falsify anything, really. You could also falsify the findings of a survey. There are different ways of falsifying. You can say, for instance, that your sample is the students in the Masters or undergraduate [program], but then you also make your Qualtrics survey link available to family and friends. So you might have people in your data who are not part of the sample or population, but you just don’t say it. There are ways of checking that, but I don’t think [anyone] is realistically going to do that.
Q: What I find a little tricky with interviews is when you want to use quotes from the interview and you have to polish the language a little, because you’re not going to type out all the ‘uhs’ and ‘ahs’. So you have to tweak it a little. I’m wondering where the fine line is between acceptable editing on the one hand and falsification on the other.
A: I mean, it’s hard to say. I think if you change the meaning, then you’re clearly in the wrong. The way it happens with research misconduct, for example, is that there are always clear cases of wrongdoing. So, for instance, this would be a clear case of falsification. And there are of course very clear cases where you simply report the data and the data is of high quality. There is no interview or open-ended question, so then you don’t have to engage too much with interpretation. Then there are cases in-between. My metric there is that if you see yourself interpreting the data in a way that tends to confirm your hypothesis – if you’re too friendly to your hypothesis, to put it that way – then that’s a red light. You should try to be as uncharitable to your hypothesis as possible. Your job is to try to falsify the hypothesis.
Q: We have a question from a student who is doing research on co-production. This involves sitting in on meetings with citizens. So how do you prevent falsifying or misinterpreting the observations you make?
A: You can reflect on your interpretation, so you get to this meta-level where you basically say, “well, this is what I’ve been doing, these are the weaknesses of my interpretation, and these are the things that I’m not sure about.” Another thing you should do is write up your notes right after your observation moment. If you postpone it and do it two days later, all sorts of memory-deception effects will kick in. That will be a problem. So just do it right afterwards.
Q: A lot of the examples of ethical breaches are really big breaches. I think for a lot of us, who are trying to do our research as ethically as possible, these big examples are not always that useful. Because, you know, we are not going to fake respondents. But sometimes those smaller dilemmas are actually the most difficult ones. You’re on the Ethics Committee in our faculty, so I was wondering what are some of the most common issues that you observe and that we could learn from?
A: Well, they often have this kind of structure (see slide below). So this is a made-up example, which is partly based on my supervision experiences. You know, students would do stuff similar to this. Imagine that one of your colleagues wants to test five hypotheses about the impact of socio-economic inequalities on educational opportunities in the Netherlands. To test these hypotheses, he plans to interview three teachers from a low-income neighborhood in The Hague. He comes to you to ask for some research advice about how to proceed with the study. The question is, what do you advise him? Is there an ethical problem with his research?
Q: In the discussion, our students quickly noted some issues with this research design. The researcher selects only a low-income neighborhood as his case study, instead of selecting multiple neighborhoods that vary in terms of average income levels. This constitutes selection bias. The researcher also aims to test five hypotheses based on only three interviews. This is known as the ‘degrees of freedom’ problem. But these seem to be issues of research design, not of research ethics per se. So why should we question this study in terms of research ethics?
A: Many of the cases that individual researchers submit to us [the FGGA Ethics Committee] are like this, because of our The Hague mission […]. Put yourself in the shoes of either the teachers or the students [in the low-income neighborhood] who have given these interviews. Then, you know, the researcher sends you the article and says, “oh look, here are the findings.” So what does that do to you as a teacher or as a student?
Many of the problems that we receive on the Ethics Committee have this kind of structure. You know, we want to draw very general conclusions based on a very small sample, because we don’t have a lot of data. But one thing that you can do as an individual researcher is to be very critical about the scope and range of your conclusions. Be very clear about the fact that they are not going to apply to the whole population. If you, as part of this population, are reading about this research, especially as a teacher, I would imagine that in encounters with other teachers you will basically feel inferior, less important, somehow responsible for this happening. So I think one of the frequent problems that we do have is this stigmatization effect, or potential for stigmatization.
Q: So what can you do?
A: I think there are two things that we could also do as individuals. One is to take charge of the science communication process […]. It’s often not the scientists or researchers themselves who communicate the findings (unless you’re talking about Twitter or Facebook); it’s someone else. One thing we could do is take hold of that communication process – to actually present the data ourselves, because we can bring more nuance to the way in which we present it. Of course, to do that we would need more time, and time is very scarce in academia today.
And the second thing is… I will just give this example. I did my second Masters in criminology, and I had this amazing teacher who was doing participant observation on offenders convicted on domestic violence charges. The way she did it was through interviews and just sitting there and observing things. And then she wrote her article and used, you know, fancy academic language. But before actually sending the article for publication, she did one very interesting thing: she took the manuscript and sent it back to the prisoners. So when the article got published, the title was “being a nosy bloody cow”, because that was the reaction of one of the inmates.
So one thing that you can do, especially if you see a potential for stigmatization, is to promise to give voice to your participants. And if you’re serious about giving voice, you can do that in the actual content of your research products.
Q: To sum up?
A: So two things. One, you are not a student, you are the actual author of the research that you’re going to produce. And the participants that you’re going to work with are in some sense co-authors of that work. So don’t be shy about what you did. And two, if you already have a draft, you can send it back to the participants, to the people who have generated knowledge for you. Do it, especially if you are doing qualitative research, because that is a way of giving people some control over what you’re going to say about them.
There is nothing fixed about those five principles. Principles don’t apply in an obvious way across cases. Between the principle and the case there is this thing called judgment. You have to exercise your judgment about a) whether the principle applies at all and b) how and to what extent it applies. So one obvious principle that would apply [in the earlier example] is scrupulousness: that you show care in the way you produce, gather, and disseminate knowledge.
Q: Thank you, Andrei, for sharing your thoughts on research ethics with us. And thank you to our students for their insightful questions!
*With thanks to Brecht, Edo, Meike-Yang and Nev for their insightful comments and questions.
When my co-teacher Janna and I set out to redesign our normally face-to-face course to accommodate the pivot to online learning this past semester, we were not sure what to do. The Covid-19 lockdown seemed to call for an altogether new approach to online teaching. In three blog posts, we’ll describe how we revised our course design, the practicalities of lockdown teaching, and why our students called our course “the gold standard of online teaching” by the end of the semester.
Part 2: The practicalities of lockdown teaching
In Part 1 of this short series, I outlined our approach to course design, which combined synchronous and asynchronous forms of learning. Our aim in the course was to create an inclusive learning environment both for students able to attend our weekly online seminars and for students who followed the course asynchronously. In this post, I will address how we put our initial ideas into practice. In short, we found that three things were particularly important when teaching online during a lockdown:
Take the small talk seriously: making space in our course for chitchat and non-teaching-related banter helped create an online community between us and our students. It put students more at ease when participating in the online chats and breakout sessions. They also indicated feeling more comfortable signaling to us when they were struggling with the course.
Make connections between synchronous and asynchronous learners: having to take a course remotely is difficult enough, let alone doing so mostly on your own. We wanted to make sure that asynchronous learners did not feel excluded from what was going on in the online seminars. To overcome this obstacle, we made use of the interactive features on the course management page (discussions, blog posts, Wikis) and created joint exercises for synchronous and asynchronous learners.
Make sure to check in: in our department, few students make use of office hours. We therefore feared that remote learners might not contact us when struggling with the course. Our solution was to make attending our office hours part of the participation grade. This sent a strong signal that attending office hours was expected of students, and it helped us give extra attention to students who needed it.
Running the live seminars
Each week, we would meet our students for three hours in an online seminar. The seminars took place in a Kaltura Live Room, the online teaching platform acquired by our university. The Live Room made it possible for us to show slides, use a whiteboard, share our screen, have students work in breakout groups, and do several other things that helped approximate a face-to-face classroom setting. Managing multiple functionalities at once proved difficult, however. Since we were co-teaching, one of us would lecture or lead the discussion with students, while the other monitored the chat or activated tools when needed.
We made sure to start each seminar with some small talk, with topics ranging from Netflix recommendations to the small joys of freshly baked pastries and park picnics during lockdown. Small talk proved to be important for our seminars for several reasons: it introduced a semblance of normal social interactions in our course; it opened the discussions in the chat, making students more comfortable to contribute; and it allowed us to do a quick check before each seminar to see how everyone was doing.
It is important to note that we did not shy away from sharing our own experiences with the students. After one of us had a bad day, about half-way into the course and into the lockdown, and expressed as much during the small talk, several students expressed feeling more comfortable admitting that they were struggling as well. In hindsight, this became one of the most appreciated features of our course (see also below on course evaluations).
Our seminars then followed a standard structure. With three hours at our disposal, we dedicated the first hour to a short lecture. One of us would talk, supported by slides and other visual aids, while the other monitored the chat. We made the lecture interactive by including brief surveys, posing questions for students to answer in the chat, and sharing links to additional online resources. The lecture would end with a short assignment related to the week’s lecture topic. During the second hour, students worked together in breakout groups on the assignment. While the assignment would rarely take a full hour to complete, we wanted students to have enough time to take breaks and chat amongst themselves. For this reason, we did not enter the breakout groups unless invited by the students (for instance, when they had a question). The third hour was dedicated to presentations: the various groups would report back on their completed assignments, and some students would present their blogs. We ended each seminar with a general discussion, to which students could contribute via webcam or chat.
For the asynchronous learners, we recorded the lecture component of each seminar. Breakout groups and class discussions were not recorded, as we feared that students present in the online classroom would be more reluctant to participate actively if their comments and remarks were ‘on the record’. After each seminar, we posted the lecture video on our learning management system (Blackboard).
We also added several features to our learning management system to help asynchronous learners understand the learning materials and stay engaged with the course. First, we created several discussion threads where students could pose questions. One thread was dedicated solely to organizational matters related to the course; others were structured around each course week and invited questions of a substantive nature. Second, we created a glossary of difficult terms and concepts from the course readings, for which we used the Wiki function in our learning management system. Students were asked to post any terms they were struggling with or to post definitions of listed concepts that they already knew. Finally, we posted students’ blogs on the course page and asked students to use the comments function to ask questions or provide feedback on the blogs.
While we designed our course page on the learning management system predominantly with the asynchronous learners in mind, we were pleasantly surprised to see it helped forge connections between synchronous and asynchronous learners in our course: students answered each other’s questions in the online forums and engaged in lengthy discussions around the blogs, sometimes over several weeks. To a large degree, these interactions were unforeseen. While we had aimed to incentivize students to interact with each other by giving them a participation grade (weighted at 20% of the final grade), our students had initially misunderstood our instructions to mean they would be assessed either on synchronous or on asynchronous learning activities. When synchronous learners used the interactive features on the learning management system, they told us they did so simply because they enjoyed communicating with other students.
One of the mechanisms at our disposal was the set of exercises we gave students in the online seminars to work on in breakout groups. We would distribute the same exercises to the asynchronous learners, who would e-mail us their completed work. The exercises always involved a small research task that helped connect the themes from the course readings to current events. To give an example: in our week on corporate social responsibility, students explored public corporations’ charitable giving and other responses to the coronavirus pandemic and compared these against the measures taken to benefit the corporations’ shareholders. During the live session, each breakout group researched some of the world’s largest firms. We collected the results in a shared Google Drive file, to which asynchronous learners added the findings from their own self-study efforts. The result was a collectively assembled dataset. Curious about other exercises? Click here.
Finally, we wanted to create a welcoming environment for students to interact with us, the course instructors. Again, we predominantly had asynchronous learners in mind. Since we would not meet our students in person for the duration of the course, we were afraid that we would not be able to find out when students were struggling with their coursework during these strange times. We therefore included attending online office hours in our participation rubric, hoping to incentivize students to reach out to us. This worked as expected: over the course of seven weeks, we spoke with almost all asynchronous learners in a one-on-one setting. While most conversations initially covered assignments or other substantive questions related to the course, they also provided an opening to talk about the – sometimes very serious – situations in which our students found themselves during the lockdown. In some cases, we were able to direct students to support services provided by our university; in others, we simply offered a listening ear. All in all, our office hours resulted in very meaningful conversations with our students that we might not have had under normal circumstances.
Up next: how students experienced our online course
When my co-teacher and I set out to redesign our normally face-to-face course to accommodate the pivot to online learning this past semester, we were not sure what to do. The Covid-19 lockdown seemed to call for an altogether new approach to online teaching. In three blog posts, we’ll describe how we revised our course design, the practicalities of lockdown teaching, and why our students called our course “the gold standard of online teaching” by the end of the semester.
Part 1: Synchronous versus asynchronous learning – Why choose?
The graduate-level course Markets in the Welfare State is generally the highlight of my teaching year. It’s an elective on the topic of my research, meaning it’s that rare treat of a course in which I get to teach the topics I enjoy the most to students with a strong interest in the subject matter. This year, I was joined by Janna Goijaerts, PhD student and teacher-in-training at Leiden University.
The course started only a few weeks after our university had made the pivot to online teaching due to the coronavirus pandemic. This meant that Janna and I had to radically revise our standard course design. Like so many university teachers, we struggled with the choice between synchronous and asynchronous teaching. While educators on social media seemed to strongly prefer either one or the other, we were not so sure.
Our compromise was a course design that allowed students to do both: attend weekly online seminars or follow the course in their own time via the learning management system. We made sure to incorporate various interactive features. It worked. Both student performance and evaluation scores were up from the regular edition of the course. One student even deemed our course “the gold standard of online teaching.”
Hyperbole aside, we believe that we found a way to make online teaching enjoyable for both students and teachers who are largely used to face-to-face teaching, while not sacrificing performance. In the following three posts, we will therefore outline the main elements of our course design, describe how we ran our course, and report back on how students experienced our course.
We hope that our experience may be useful to other teachers who, like us, are at the start of another semester of online teaching.
Reconsidering our course design
The pivot to online teaching proved more consequential for our course than ‘simply’ transferring a face-to-face course to an online setting. While the course normally attracts only around 15 students, this year more than 40 had signed up. This meant that Janna and I had to revise our envisioned course design once it became clear that social distancing would prevent any regular teaching. We had to rethink several elements of the course, such as our normal reliance on small-group discussions and the substantial number of written assignments.
Like so many university teachers, we struggled with the choice between synchronous and asynchronous teaching. Synchronous learning refers to a situation whereby students learn at the same time; asynchronous learning occurs when students learn at different times. Both terms refer to online education, whereby students are not in the same location when they learn. While many educators on social media seemed to strongly prefer either one or the other, we were not so sure. We asked ourselves:
How do we create an inclusive learning environment that is attentive to the needs of students who may be prevented from participating synchronously in our course, while at the same time serving the needs of students who prefer the structure and sociality of synchronous learning?
Our students come from incredibly diverse backgrounds. While most students are of Dutch origin, around one-third are internationals from all over the world. The Dutch students typically stayed in their dorms or temporarily moved in with their parents during the lockdown. Many (but certainly not all) international students traveled home to be with their families. Some students found themselves isolated and on their own during the lockdown; others combined their studies with jobs or family care and felt overwhelmed. This diversity meant that any choice for either synchronous or asynchronous teaching would necessarily exclude certain students from our course.
Two learning tracks
Having already taught courses online, I felt confident enough to answer our question with “why not do both?” So we designed our course specifically with two forms of participation in mind. The first option was to participate synchronously in the course. This meant attending weekly live seminars on Wednesday mornings between 9:15am and 12pm. The live seminars were the online equivalent of our regular face-to-face meetings and consisted of short lectures on the course readings, presentations of cases, short exercises, and group discussions. Attendance in the live seminars was not mandatory, but we did expect active participation during these sessions.
We did not use the full three hours of our live seminars for direct instruction. Instead, we divided the available time into three timeslots. In the first hour, we would present a lecture, supported by slides. During the second hour, students would meet in breakout groups to do a collaborative exercise; during this time, students were also free to schedule breaks as desired. In the third hour, we would meet back in the online classroom for a plenary discussion of the exercise.
The second option was to participate asynchronously in the course, whereby students used the interactive features of the course management system in their own time. Instead of attending the live seminars, students could view video recordings of the lecture component of the seminars. They did individual versions of the seminar exercises in their own time, and they could comment on blogs or contribute to the course Wiki. Asynchronous students were also strongly encouraged to make an appointment for online office hours at least once during the course.
Students were also welcome to combine both options (e.g. participate in the live seminar one week and asynchronously the next).
(post continues under table)
Table 1: Types of course participation

| Synchronous participation | Asynchronous participation |
| --- | --- |
| Attend live seminars | View online lectures |
| Ask or answer questions during lectures | Ask or answer questions in discussion forums |
| Collaborate with other students in breakout groups | Do seminar exercises in your own time |
| Take a leadership role in breakout groups (moderator, scribe, reporter) | Contribute to course Wiki |
| Contribute to group | Comment on blog posts |
| Present your blog during a seminar | Attend online office hours |
Combining two forms of teaching and learning meant that we had to adjust our assessment strategy. First, we reduced the number of paper assignments from two to one. Instead of writing a short paper at mid-term, students now wrote a blog post about a recent socio-economic measure taken in response to the Covid-19 crisis in a country of their choice. Students were encouraged to write about the same measure in their final paper at the end of the course. Blog posts were published on Blackboard for other students to read and comment on. Synchronous learners also had the option to present their blog posts during a live seminar.
A second major change was to add a participation grade to the course. This might seem counterintuitive, given our intention to accommodate students who were unable to participate extensively in the course. Yet we felt it was important to both incentivize and reward whatever forms of participation students could manage. This meant, for instance, that we weighted attending a live seminar equally with commenting on a blog post. We also made sure that students could earn a passing grade even with minimal participation (see our rubric here). In this way, we hoped to encourage engaged participation in the course, while recognizing that not all students could participate to the same extent as in a face-to-face course.
A final consideration was how to keep tabs on students who participated asynchronously in our course. We knew that students sometimes feel uncomfortable reaching out when they need help, and with our asynchronous participants we would not have the kinds of interactions that would help us assess whether or not they were struggling with the course. We therefore asked students to send us any exercises they did in their own time, which we would briefly check. We also strongly encouraged them to make appointments for online office hours and, to further incentivize this, included office hours appointments in our participation rubric. This ensured that, by the end of the course, we had spoken to each asynchronous learner at least once.
Up next: read how our lockdown-proof course design worked out in practice.
It’s the second and final day of our SASE Mini-conference on The Welfare State in Financial Times. Today, we start with a panel on financialization and state transformation (see details below). We conclude our mini-conference with a panel on financialization and the changing face of welfare.
Panel: Financialization and State Transformation (15:00-16:30 CET)
Panel: Financialization and the Changing Face of Welfare (18:00-19:30 CET)
Today, we’re kicking off our long-awaited SASE mini-conference on financialization and the welfare state. Organized by Jeanne Lazarus, Daniel Mertens and myself, the mini-conference explores the complicated new ways in which social and financial policies have become entangled in contemporary welfare states. The contributions map the ongoing financialization of the welfare state in contemporary political economies, focusing on the introduction and expansion of financial tools and mechanisms in public and private welfare provision. They examine how welfare states and other social groupings have debated and introduced new public policies and financial instruments that promise protection against growing financial risks in everyday life. Looking at these promises of protection through the market requires a fundamentally different understanding of the nature of the welfare state than the scholarship’s traditional focus on decommodification.
Even though we are not able to meet in person in Amsterdam this year, we have been able to put together an exciting online program covering multiple dimensions of financialization in the realm of social policy and the state. Today’s program includes panels on the political economy of financial sector practices, regulation and macroeconomic functions; financialization and household debt; and financialization and pensions. For details, see below.
Panel “The Political Economy of Finance Sector Practices, Regulation and Macroeconomic Functions” (10:00-11:30 CET)
Panel “Financialization and Household Debt” (15:00-16:30 CET)
Panel “Financialization and Pensions” (18:00-19:30 CET)