I-DEEL: Inter-Disciplinary Ecology and Evolution Lab

A visit to Canada

27/10/2025

by Coralie Williams

About a month ago I came back to Sydney after some time in Canada. Like Lorenzo, I visited Shinichi at the University of Alberta (UofA) for a few weeks to work on my thesis. I had never been to Canada, so I was excited to immerse myself in the land of moose and maple syrup.

I really enjoyed the summertime in Edmonton and working on the UofA campus. It was great to visit and to see the new lab take shape; I am sure it will be very successful. The main highlights were having an office with a window (the Sydney office space is called the “dark side” for a reason), indulging in Tim Hortons doughnuts every day, and trying poutine for the first time with Ayumi, which we both liked! After some focused work time, I was excited to explore Canada’s wilderness away from the city and set off with my partner for several days of travelling in the Yukon and Alberta.
UofA campus and the city of Edmonton.

The first stop was the Yukon. It was everything I had hoped for: emptiness, autumn colours, wolves howling in the middle of the night, and beautiful scenery. During a stop in Haines, Alaska, we spotted a mum grizzly with her cubs. It was surreal, but also a bit sad to see how stupidly close people got just to take a picture.
Haines Junction in the Yukon.
Close to Chilkoot Lake in Alaska.

The second part of the trip was in the Rockies in Alberta. Driving from Edmonton to the Rockies is quite a contrast: the flat plains around Edmonton give way to massive mountainous rock formations. As Lorenzo mentioned in his blog, every turn on Highway 93 had a breathtaking view. At one corner, a large black bear walked right in front of our car before sitting down to eat berries and flowers by the roadside, with no care for us or the queue of cars building behind us.
Maligne Lake and somewhere along AB-93.

Prior to going to Canada, doing my best to prepare like a true tourist, I read up on anything bear-related and came across the saying "if it's brown lay down, if it's black fight back, if it's white goodnight" many times. But I soon realised that black bears can have very different fur colours, and even grizzly (“brown”) bears can be quite blond. So, I guess that expression isn’t too helpful, and perhaps these common names should be revised. It was also cool to see the bear-safe bins, as I had read about how they were engineered to balance the human-bear trade-off.

All in all, it was a special trip, and I feel very fortunate to have had the opportunity to visit UofA and these places. Saying goodbye to Shinichi and fellow lab members in Edmonton was bittersweet, but it’s a normal part of how things move on. Back in Sydney, there have also been many farewells over the past months, with colleagues wrapping up their contracts and starting new roles. I’ll be the last to finish up in the Sydney branch, so for now I’ll be wrapping up my thesis, appreciating these experiences, and getting ready for the next chapter.

Back to the root of meta-analysis: understanding sampling variance

30/9/2025

by Yefeng Yang

Meta-analysis has become a cornerstone of evidence synthesis across disciplines, from medical research to environmental science. Thanks to modern software, conducting a meta-analysis is more accessible than ever. However, as our recent work on the quality of meta-analyses of organochlorine pesticides highlights (https://www.nature.com/articles/s41893-025-01634-5), accessibility does not always translate into quality. A key reason is that practitioners often overlook the statistical theory underpinning meta-analysis. In a series of blog posts, I aim to demystify these foundational concepts, starting with a common topic: sampling variance.

What is sampling variance?
Sampling variance is a fundamental concept in statistics, yet it is often misinterpreted in meta-analysis. When researchers talk about “sampling variance”, they usually mean the variance associated with effect size estimates from primary studies. However, the term is broader. Sampling variance exists whenever we estimate any parameter: an effect size, an overall mean effect, a regression coefficient, or even a variance component such as the between-study variance (tau^2). In essence, sampling variance reflects the uncertainty that arises from random sampling. It quantifies how much an estimate would fluctuate if we were to repeat the study many times under identical conditions.
Source: made by the author in R 4.0.3
Sampling distribution and standard error
To understand sampling variance, we first need to understand the sampling distribution. Imagine conducting a study with a sample size 𝑛 to estimate an effect size, such as the standardized mean difference (SMD) between treatment and control groups. If you could repeat this experiment many times (each time drawing a new random sample from the same population), you would obtain a distribution of SMD estimates. This is the sampling distribution of the SMD.

The standard error (SE) is the standard deviation of this sampling distribution. It quantifies the precision of your estimate: smaller SEs imply greater precision. The sampling variance is simply the square of the standard error (SE^2). In practice, we rarely know the true effect, so we rely on the standard error (and thus sampling variance) to gauge how much our estimate might vary due to random sampling.

In meta-analysis, sampling variance arises in several contexts:
  (1) effect size estimates from primary studies,
  (2) overall mean effect from an intercept-only meta-analytic model,
  (3) regression coefficients from a meta-regression, and
  (4) variance components (e.g., tau^2, the between-study variance).
Deriving sampling variance: “ideal” vs. “practical” approaches
Ideally, to calculate the sampling variance of an effect size estimate, you would repeat a study with the same sample size (n) many times, compute the effect size each time, and then calculate the standard deviation of the resulting sampling distribution.

For example, to estimate the sampling variance of an SMD, you would:
   (i) Conduct the study multiple times with sample size (n).
   (ii) Compute the SMD for each study.
   (iii) Form the sampling distribution of SMDs.
   (iv) Calculate its standard deviation (the standard error) and square it to get the sampling variance.
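The steps above can be sketched with a quick simulation. Below is a minimal illustration in Python with made-up sample sizes and a made-up true effect; the analytical value it is compared against is the standard large-sample approximation for the sampling variance of an SMD, v = (n1 + n2)/(n1 n2) + d^2/(2(n1 + n2)):

```python
import numpy as np

rng = np.random.default_rng(42)

n = 30          # per-group sample size (hypothetical)
true_smd = 0.5  # true standardized mean difference (hypothetical)
reps = 20000    # number of imaginary replications of the study

smds = np.empty(reps)
for i in range(reps):
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(true_smd, 1.0, n)
    # pooled SD, then Cohen's d for this replicate
    sp = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
    smds[i] = (treatment.mean() - control.mean()) / sp

# step (iv): the variance of the simulated sampling distribution
empirical_var = smds.var(ddof=1)
# standard large-sample approximation for the sampling variance of an SMD
analytic_var = (n + n) / (n * n) + true_smd**2 / (2 * (n + n))
print(empirical_var, analytic_var)
```

The two numbers land close together, which is exactly why the analytical shortcut (next section) is trusted in practice.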

Similarly, for the overall mean effect in a meta-analysis, you would:
   (i) Randomly draw sets of studies many times.
   (ii) Fit an intercept-only meta-analytic model to each sample to estimate the overall mean effect.
   (iii) Form the sampling distribution of these mean effects.
   (iv) Calculate its standard deviation and square it.
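The same logic can be sketched for the overall mean effect. In this hypothetical Python illustration the sampling variances and the between-study variance tau^2 are treated as known, which is a simplification of what an estimator such as REML actually does:

```python
import numpy as np

rng = np.random.default_rng(1)

k = 20                               # studies per meta-analysis (hypothetical)
vi = rng.uniform(0.01, 0.05, k)      # known sampling variances of the k effect sizes
tau2 = 0.02                          # between-study variance (assumed known here)
mu = 0.3                             # true overall mean effect
reps = 5000                          # imaginary repeated meta-analyses

w = 1.0 / (vi + tau2)                # inverse-variance weights
means = np.empty(reps)
for i in range(reps):
    yi = rng.normal(mu, np.sqrt(vi + tau2))   # draw one set of k effect sizes
    means[i] = np.sum(w * yi) / np.sum(w)     # intercept-only (weighted mean) estimate

empirical_se = means.std(ddof=1)              # SD of the sampling distribution
analytic_se = np.sqrt(1.0 / np.sum(w))        # textbook SE of the overall mean
print(empirical_se, analytic_se)
```

Again, the simulated standard error matches the analytical one, sqrt(1 / sum of weights).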

This conceptual exercise shows what sampling variance means, but of course, repeating studies thousands of times is impractical.

From concept to calculation: statistical theory as a shortcut
Fortunately, statistical theory provides us with elegant shortcuts. Instead of repeated sampling, we rely on mathematical results that describe the expected variance of estimators under specific model assumptions. For individual effect sizes (e.g., log response ratios or Fisher’s z), Taylor series approximations can be used to derive analytical formulas for sampling variance. For overall effects and regression coefficients in meta-regression, the framework shifts to Weighted Least Squares (WLS) or, in random-effects settings, Generalized Least Squares (GLS) estimation.

These results rest on classic statistical theorems such as the Gauss–Markov theorem and the theory of the Minimum Variance Unbiased Estimator (MVUE). Thus, the familiar SE formulas in meta-analysis are not arbitrary; they are rooted in the same optimality principles that underpin regression theory. Understanding this connection clarifies not only what the standard error represents but also why it behaves as it does, linking the practical mechanics of meta-analysis back to its statistical roots.

Reflections on our time at SETAC AU 2025

31/8/2025

by Kyle Morrison

Lorenzo and I had the great opportunity to return to SETAC-AU (the Australasian Society of Environmental Toxicology and Chemistry) conference, held this year in Wellington. For both of us, it was our first time in New Zealand, and we were lucky to have amazing weather during our visit (which we were told is a rare occurrence).
 
During the conference, I presented a talk about my meta-analysis appraisal tool called MATES. I also had a poster about a systematic evidence map of the past sixty years of organochlorine pesticide research. Lorenzo presented both a talk and a poster too: his talk was about a systematic map of reviews on PFAS effects on health, and his poster was about a meta-analysis of PFAS bioaccumulation through food webs.

I am pleased to say our presentations were a great success: I won the best oral presentation award and Lorenzo won the best poster award.
Photo of Lorenzo and myself at the SETAC conference with our awards!
One of the highlights for me was reconnecting with familiar faces from previous SETAC conferences and meeting new colleagues working across a wide spectrum of pollutants and approaches - ranging from lab toxicology and field ecology to modelling and policy translation. There is something special about seeing so many different perspectives converge on a shared goal to reduce pollution.
 
After the conference we took the opportunity to explore some of the local sights and enjoyed the local cafes, restaurants and pubs, each with its own unique vibe and character. Although the week flew by, we are left grateful and inspired. We are already looking forward to staying in touch with the SETAC community and to future opportunities, whether at the next conference or in collaborative projects that keep the momentum going.

Five weeks across western Canada

31/7/2025

by Lorenzo Ricolfi

I'm sitting here at the airport in Edmonton after five weeks in Canada, reflecting on everything I experienced, and I felt the need to write it down. So here we are.

My first time in Canada went by in the blink of an eye. I spent the first three weeks working on my thesis and ongoing projects at the University of Alberta, where Shinichi and his new team kindly welcomed me. I had the chance to grab a couple of beers with Santi and Erick, and we ended up having some deep, fascinating conversations about the future of AI in academia and beyond, mixed in with the occasional lighter banter. Time was limited, but enough to realise they’re both great guys and brilliant scientists. Their minds are open and sharp, the way a scientist’s mind should be.

Edmonton is ok. It reminded me a lot of an average U.S. city: big parking lots, fast food chains everywhere, and footpaths clearly not designed for pedestrians. Still, there were highlights. One day, Shinichi, Yefeng, Toto, and I went for a hike in Elk Island, where we saw a couple of bison from a distance. Man, their heads are massive!
From the left: Shinichi, me, Yefeng, and Toto during the hike in Elk Island.
A bison minding its own business (look how big its head is!).

I also learned a lot about Edmonton thanks to a memorable chat with my Uber driver, Nwabueze. He’s a Nigerian guy, a little older than me, who picked me up from the airport. After the usual chit-chat, he told me how he left Nigeria six years ago, not because things were bad, but because he wanted to open his mind and challenge himself. Among many interesting stories, he explained to me how incredibly cold it gets here in winter, how diesel fuel requires antifreeze additives during the colder months, and why so many windshields are cracked (spoiler: they put rocks on the roads to improve traction on ice). “It’s tough during winter,” he said. I believe him.

The University of Alberta was good, although very quiet. After those three weeks of work, my girlfriend and I set off on a road trip from Edmonton to the Rockies (Jasper and Banff NPs), down to Vancouver and Vancouver Island, and then back. About 4,500 km through the wilderness of western Canada.

The Rockies are absolutely stunning. Nature, landscapes, and alpine lakes that really take your breath away. Sadly, a large portion of Jasper burned last summer in a devastating wildfire. Driving and walking through the scorched land, where everything felt lifeless and silent, was surreal. A local in Jasper told us the cause of the fire is still uncertain, possibly a cigarette, lightning, or a mix of both. We spent three days there and hiked a couple of beautiful trails in the areas that hadn't burned. We saw squirrels, chipmunks, and even wild goats.

From Jasper, we took Highway 93 south toward Banff. That drive alone is worth the trip. The beauty is hard to put into words, so here’s a picture to help you imagine it.
The 93 road from Jasper to Banff.

We were lucky enough to spot a couple of American black bears along the way. It was amazing to see them roaming freely in their natural environment, minding their own business. In Banff, we stayed five days and visited the iconic Lake Louise and Moraine Lake. We also ventured into the nearby Yoho and Glacier National Parks. There, we saw elk and what I think was a marmot (but don’t quote me on that).

A black bear minding its own business.

We did several incredible hikes, always carrying a bell to make noise (so grizzly bears know you're around) and a can of bear spray, which is mandatory for many trails. Unfortunately, or maybe fortunately, we didn’t see any grizzlies. I guess the bell worked.
After immersing ourselves in the mountains, we headed to Vancouver and Vancouver Island. We had a great impression of Vancouver: good vibes, tasty food, pretty skylines, and hidden gems tucked around the city. To reach Vancouver Island, we took a car ferry that crosses over in a couple of hours. Tofino was the highlight, a little village full of personality.

Glamping domes and a fishery building in Tofino.

If I had to point out one downside to the trip, it would be the food. In Canada, decent quality food, what you'd consider average elsewhere, comes at a steep price. Many people resort to fast food a few times a week, and overall, Canada doesn’t really shine in the culinary department. Not that it was a surprise, considering some of the national staples are ketchup Lay’s chips and poutine (French fries with cheese curds and gravy).
So now it’s time to fly back to Sydney, slightly tired, definitely inspired, and maybe still craving real food. Canada surprised me in many ways, some good, some weird, and all worth it. I’m already looking forward to my next adventure (New Zealand in about 20 days!), but for now, I’ll just sit here with my Tim Hortons coffee and say: thanks, Canada. It was a wild ride.

Around meta-analysis (16): meta-data, metadata, and more meta confusion

28/6/2025

by Malgorzata (Losia) Lagisz

This post is inspired by Coralie’s recent blog post, “Meta-analysis terminology can be confusing”, in which she untangles a range of commonly used, misused, and confused terms in meta-analysis—such as subgroup analysis, moderator analysis, meta-regression, fixed-effect vs fixed-effects models, and multivariate.

These certainly warrant clarification. But what about the terminology for the underlying data—could that be just as confusing?

 
What is “meta-data”?

There are many definitions of meta-data (or metadata), but most describe it as “the information that defines and describes data” (ABS). Since information is also a form of data, meta-data itself can have meta-data… which can have more meta-data… and so on. Likewise, a dataset can include meta-data, which itself may include even deeper layers of meta-data. This creates a kind of conceptual circularity that adds to the confusion—especially in the context of meta-analysis (and systematic reviews of all sorts).
 
Does meta-analysis use meta-data?

Yes—but not always in the way people expect.

It is common to assume that meta-data simply refers to the dataset compiled and analyzed in a meta-analysis, especially since both terms contain the prefix “meta” and deal with data from primary studies. As a result, when researchers are asked to share both their data and meta-data, they often upload only the dataset itself. However, in this context, meta-data refers specifically to the description of the dataset: a detailed explanation of the variables, their definitions, units, data structure, etc. Admittedly, the dataset itself may also contain some information that can be considered meta-data, which contributes to the confusion.


Visualising layers of meta-data in meta-analyses

What counts as “data” or “meta-data” depends on the context (see my diagram above). In a primary empirical study, the data might consist of field or lab measurements of things, humans, or systems, while the meta-data includes descriptions of the variables in that spreadsheet (black parts of the diagram above). 

But once a primary study is published (or shared), it gains another layer of meta-data: title, abstract, publication date, author names, affiliations, etc. This is the meta-data librarians and other information specialists work with (green parts of the diagram above).

In a secondary study, such as a meta-analysis or systematic review, you typically compile not only data from primary studies (selected results and their descriptors), but also some of their meta-data (e.g., study-level characteristics such as the study reference, title, authors, journal, DOI, etc.), and you also generate new data for your synthesis (e.g., recalculated effect sizes). The resulting dataset is a layered mix of data and meta-data from different sources and levels.

What to do in practice

In practice, for a meta-analysis (or a systematic review or other secondary study), use terminology consistently in the context of your study: call your dataset "data" and the description of your dataset "meta-data" (purple/plum, NOT pink, parts of the diagram above). You can still acknowledge that your data contain some meta-data from the underlying primary studies (e.g., information describing the publications).
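For instance, the files shared alongside a meta-analysis might look like this (hypothetical file and variable names, purely to illustrate the naming convention):

```
data.csv             <- your dataset ("data")
data_dictionary.csv  <- the description of your dataset ("meta-data")

# data_dictionary.csv could contain rows like:
# column_name  description                          units_or_levels
# study_id     identifier of the primary study      integer
# yi           effect size (e.g., Hedges' g)        unitless
# vi           sampling variance of yi              unitless
```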
 
Why it matters

Conceptual complexity—and the commonly inconsistent use of terminology—may partially explain why appropriate meta-data is often missing or poorly documented in shared datasets from meta-analyses (and various types of systematic reviews). When people are asked to share meta-data but think it just means their dataset (data), they share only the dataset, without a description of all its variables (meta-data). But without complete and well-structured meta-data (the description of the data), it becomes difficult to interpret the dataset (the data), let alone reuse it or reproduce the analyses. Transparent and clear meta-data (the description of your dataset) is crucial for making meta-analyses truly open and reusable.
 
NOTE:
You can find earlier blog posts from my “Around meta-analysis” series archived on my personal website.

Meta-analysis terminology can be confusing

17/5/2025

by Coralie Williams
Image credit: patpitchaya (iStock)
I doubt I am the only one who has felt lost at times with meta-analysis terminology. Early on, I even struggled to understand what effect size referred to. I thought it meant the strength of a relationship in a model. It does, but in meta-analysis effect sizes are also the outcome data we analyse. So, the same term can refer both to the estimated effect (the regression coefficient) and the data we are modelling, depending on the context. I started writing the following points down out of frustration and to keep track for myself when reading the meta-analytic literature.

Subgroup analysis, moderator analysis, meta-regression
Sometimes we come across terms that sound different but actually mean similar things. Moderator analysis is a broad term for any method that examines whether moderators (also called predictors, or independent variables) help explain differences in the effect sizes being analysed. One such method is subgroup analysis, where studies are grouped based on a categorical variable and effect sizes are compared across these groups (e.g. treated vs control). This method is useful for answering many questions, but it is limited to categorical variables. Meta-regression takes things a step further by using a regression model to look at how one or more moderators are linked to variation in effect sizes. These moderators can be categorical, continuous, or both. So, subgroup analysis is really just a simpler case of meta-regression, and both are types of moderator analysis used in meta-analysis.
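A tiny numeric illustration makes the "special case" point concrete. The sketch below (Python, with invented effect sizes and variances) shows that a weighted regression with a 0/1 group moderator reproduces the inverse-variance weighted subgroup means exactly:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical effect sizes from two subgroups of 10 studies each
yi = np.r_[rng.normal(0.2, 0.1, 10), rng.normal(0.6, 0.1, 10)]
vi = np.full(20, 0.01)                    # sampling variances (equal, for simplicity)
group = np.r_[np.zeros(10), np.ones(10)]  # 0/1 categorical moderator

# Subgroup analysis: inverse-variance weighted mean within each group
w = 1.0 / vi
mean0 = np.sum(w[group == 0] * yi[group == 0]) / np.sum(w[group == 0])
mean1 = np.sum(w[group == 1] * yi[group == 1]) / np.sum(w[group == 1])

# Meta-regression: weighted least squares with a dummy moderator
X = np.column_stack([np.ones(20), group])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ yi)

# beta[0] equals the group-0 mean; beta[0] + beta[1] equals the group-1 mean
print(mean0, mean1, beta)
```

(Real meta-regression software such as metafor also models heterogeneity, but the subgroup-as-dummy logic is the same.)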

Fixed-effect vs fixed-effects models
Fixed-effect (singular) and fixed-effects (plural) are sometimes used interchangeably in the literature to describe meta-analysis models, despite referring to different statistical assumptions (Borenstein et al., 2010; Viechtbauer, 2010). The fixed-effect (singular) model assumes that all studies in the meta-analysis estimate a common true effect size. The fixed-effects (plural) model, by contrast, assumes that each study has its own true effect, but these are treated as fixed quantities rather than draws from a distribution. This makes it suitable for cases where we believe heterogeneity exists but want to restrict inference to the studies at hand (which is actually quite rare). In statistical modelling, "fixed effect" usually refers to a non-random coefficient in a regression model, for example species traits. But in meta-analysis, the label “fixed effect” can refer to a whole model. Confusing? Yep. And that's why some recommend renaming the fixed-effect model to the common-effect or equal-effects model for clarity.
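As a concrete anchor for what the common-effect (fixed-effect, singular) model actually computes: its pooled estimate is just an inverse-variance weighted mean, and its standard error follows directly. A minimal Python sketch with invented numbers:

```python
import numpy as np

# Five hypothetical effect sizes and their sampling variances
yi = np.array([0.10, 0.30, 0.35, 0.60, 0.80])
vi = np.array([0.02, 0.05, 0.01, 0.04, 0.03])

# Common-effect (a.k.a. fixed-effect / equal-effects) model:
# every study is assumed to estimate the same true effect,
# so we pool with inverse-variance weights
w = 1.0 / vi
mu_hat = np.sum(w * yi) / np.sum(w)   # pooled estimate
se = np.sqrt(1.0 / np.sum(w))         # its standard error
print(mu_hat, se)
```

A random-effects model would instead use weights of 1/(vi + tau^2), pulling the weights closer together so precise studies dominate less.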

Multivariate in meta-analysis
Another tricky term is multivariate. In a statistical sense, multivariate refers to multiple response (outcome) variables. However, in meta-analytic modelling, the term can have several meanings. I recommend reading a great post by James Pustejovsky, who elaborates on this (with some humour) and explains its various meanings; it has helped me a lot to understand this term in the context of meta-analysis methodology.
These kinds of nuances in terminology can make it hard to get a clear conceptual footing, especially when you are new to the field, but hopefully it doesn’t scare you away from the wonderful (really, it is) world of meta-analysis!
 
References
  • Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2010). A basic introduction to fixed-effect and random-effects models for meta-analysis. Research Synthesis Methods, 1(2), 97–111. https://doi.org/10.1002/jrsm.12
  • Viechtbauer, W. (2010). Conducting Meta-Analyses in R with the metafor Package. Journal of Statistical Software, 36(3). https://doi.org/10.18637/jss.v036.i03

A True Canadian Experience: Let’s Go Oilers!

16/4/2025

by Shinichi Nakagawa

When I was in Sydney this February, my old colleague Rob Brooks (from UNSW) told me I should go and see an NHL game, specifically an Oilers game, so finally, I did! I went with Totoro (my older son), and it was our first time at Rogers Place. It was such an exciting night.

Before the game, my new colleague Kim Mathot kindly lent us two Oilers T-shirts. They were very orange and very cool; we looked like real fans. The whole place was full of people wearing fancy Oilers jerseys, shouting and cheering before and throughout the game.

The game was close, 2–2 until the third period. Then suddenly, the Oilers scored again and again! Every goal made us elated and the stadium shake. Everyone was yelling, jumping, clapping. There was so much energy, and the final score was 4–2.

It was very Canadian, at least that is what I thought, with cold drinks, loud music, and parachuting pizzas. Maybe not always polite, yet very polite.

I still don’t understand the rules very well (icing?), but it was a lot of fun. Totoro and I had a great time and bonded over the Oilers, which I first thought was “Eulers”, after the famous mathematician Euler [pronounced: oy-lr]!?!? – how wrong I was!

(photos by Shinichi Nakagawa)



Cross-country skiing

2/4/2025

by Ayumi Mizuno

I am not a fan of exercise or sports. If you know me, this probably does not come as a surprise. I do enjoy walking and stretching using an exercise ball, but the only sport I have ever willingly tried is bouldering - and that is about it. Throughout my life, I have done my best to avoid anything involving physical activity - gym class, sports festivals, and any other sports-related events.

But after coming to Edmonton, I ran into something I could not escape: cross-country skiing.

Before moving here, I spent six years in Hokkaido, Japan - a place famous for its heavy powder snow, where people from all over the country and abroad come to ski. And yet, I never once tried skiing. Not even once. Even when invited, I always found a way to politely say no.

So, when Losia first invited me to go cross-country skiing, I seriously regretted never giving it a shot back then. Even trying it once would have helped.

My first time? Honestly, I spent the whole time thinking, Why am I doing this? And afterward, I thought, once is more than enough. Then, the second time came. I was told we were going for a “walk in the snow,” but when I showed up - surprise! - it was cross-country skiing again. And somehow, we ended up on an advanced trail. I wanted to cry.

But the third time... it finally felt fun. The endless snowy fields stretching out before me, the quiet, the fresh air - it was actually peaceful. For the first time, I did not fall even once. I owe a lot to Losia, who patiently and kindly taught me, even when I struggled!

Now, I think I might actually enjoy cross-country skiing. Part of me even suspects I would choose to go again next time.


Mardi Gras Parade 2025

1/3/2025

by Malgorzata (Losia) Lagisz

I've been living in Sydney for ten years now, yet somehow, I never made it to the famous Mardi Gras Parade—even though it happens every year just a few kilometers from the UNSW campus. With this being my final year in Sydney (well, technically, just half a year), I realised it was my last chance to experience it before moving to Canada.

My younger son decided to join me (how had he never heard about the Mardi Gras before?). We easily found a free viewing spot near the end of the parade route in Moore Park. Since I was still recovering from a fever, we stayed for less than an hour, but it was enough to see nearly 50 parade floats and groups—and to soak up the incredible atmosphere.

The event was colorful, energetic, and wonderfully diverse. The crowd included everyone from babies in strollers to centenarians on mobility scooters, with people of all backgrounds, body shapes, and abilities. The floats were just as varied, featuring everything from the Childless Cat Ladies to the City Mayor, with a strong presence of Aboriginal and Torres Strait Islander people leading the march. Costumes ranged from minimalistic to absolutely extravagant, and no matter how people dressed, the energy was infectious—everyone was having a blast.

It was a cool event to witness—so joyful and uplifting. Unfortunately, I didn’t have a great camera, and my phone isn’t the best for night photography, but I still managed to capture a few shots worth sharing.


Model checking in meta-analysis

31/1/2025

by Yefeng Yang

Today, my topic is about what we often overlook in meta-analytic practice—things that could make a big difference to the reliability of our results.

Meta-analysis is everywhere now. With the rise of user-friendly statistical software, conducting one has never been easier. This accessibility is a double-edged sword. On the bright side, researchers with little to no statistical background can run meta-analyses and produce what is often considered more reliable evidence than any single study (thanks to increased statistical power). But on the flip side, it has become so accessible that many forget the statistical complexity behind it.

I’ve seen many researchers grab example code from somewhere (yes, open science – the soul of our lab!) and tweak it for their own data without really thinking about whether the approach they’re borrowing or the way they’re interpreting their results is actually valid. The problem? This can lead to misleading evidence, which, when applied to conservation, health, or policy-relevant topics, may have serious real-world consequences. So today, I want to highlight two critical but often ignored steps in meta-analysis:

1. Checking model assumptions
2. Assessing model fit
 
Are your assumptions holding up?
Every statistical model relies on assumptions, and meta-analysis models are no exception. But let’s be honest—how often do we actually check them? For example, most meta-analysis models assume that effect size estimates come from normal sampling distributions (note: this doesn’t mean the effect sizes themselves have to be normally distributed). Yet, in practice, few people ever check this assumption. It’s easy to do—just simulate the sampling distribution of the effect size you have chosen, plot a histogram, or use a normal quantile-quantile (Q-Q) plot to see if things look off:
Source: https://en.wikipedia.org/wiki/Q%E2%80%93Q_plot
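As an illustration of that check, here is a hypothetical sketch in Python: simulate the sampling distribution of a log response ratio (lnRR) for one small made-up study, then quantify how straight its Q-Q plot is (scipy's probplot returns the correlation r between the points and the reference line; r close to 1 means approximately normal):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# One hypothetical small study: n = 5 per group, lnRR as the effect size
n, mean1, mean2, sd = 5, 10.0, 8.0, 3.0
reps = 10000

m1 = rng.normal(mean1, sd, (reps, n)).mean(axis=1)
m2 = rng.normal(mean2, sd, (reps, n)).mean(axis=1)
lnrr = np.log(m1 / m2)   # simulated sampling distribution of lnRR

# Q-Q coordinates plus a straight-line fit against normal quantiles
(osm, osr), (slope, intercept, r) = stats.probplot(lnrr, dist="norm")
print(r)
```

With larger n, r creeps closer to 1; with tiny samples or means near zero, the skew (and the departure from normality) becomes visible.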
Another assumption that gets ignored is that effect size estimates are unbiased estimates of the true effect. This one might surprise you: the commonly used log response ratio, log(X₁/X₂), is actually a biased estimator because of Jensen’s inequality. There is a simple bias-correction factor based on a Taylor expansion, but hardly anyone applies it.
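The bias is easy to demonstrate by simulation. Below is a hypothetical Python sketch; for simplicity the second-order (delta-method) correction plugs in the known sd rather than the sample SDs a real analysis would use:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical true means, SD, and per-group sample size
mean1, mean2, sd, n = 10.0, 8.0, 3.0, 5
true_lnrr = np.log(mean1 / mean2)
reps = 200000

m1 = rng.normal(mean1, sd, (reps, n)).mean(axis=1)
m2 = rng.normal(mean2, sd, (reps, n)).mean(axis=1)

naive = np.log(m1 / m2)   # the usual lnRR estimator, biased by Jensen's inequality
# second-order Taylor correction: + sd1^2/(2 n1 m1^2) - sd2^2/(2 n2 m2^2)
correction = 0.5 * (sd**2 / (n * m1**2) - sd**2 / (n * m2**2))
corrected = naive + correction

print(naive.mean() - true_lnrr, corrected.mean() - true_lnrr)
```

The naive estimator shows a small but systematic bias, which the correction removes to second order.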
 
Is your model a good fit for your data?
When you fit a meta-analysis model, you are making an implicit assumption that the model accurately represents the data-generating process. But in reality, the true process is almost always more complex than any model one can come up with. This is why checking model fit is essential, yet it’s something we rarely do.
 
One simple way to test for model misspecification is by looking at standardized (deleted) residuals—if they are not randomly scattered, that’s a red flag. Similarly, we often assume that true effects vary within and across studies (heterogeneity), but how many of us actually test this using, for example, a Q-test (note that this is different from the often-reported heterogeneity index I²)? We also assume that both within- and between-study random effects follow a normal distribution, yet we almost never run statistical tests to confirm this.
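A minimal version of the residual check might look like this (a Python sketch with simulated data, using a simple common-effect fit and ignoring the small leverage adjustment that dedicated software such as metafor’s rstandard() applies):

```python
import numpy as np

rng = np.random.default_rng(5)

# 50 hypothetical studies whose data really do follow the fitted model
k = 50
vi = rng.uniform(0.01, 0.05, k)        # known sampling variances
yi = rng.normal(0.4, np.sqrt(vi))      # effect sizes around a common mean of 0.4

w = 1.0 / vi
mu_hat = np.sum(w * yi) / np.sum(w)    # common-effect estimate

# Standardized residuals: under a well-specified model these should look
# like N(0, 1) noise with no trend against precision or moderators
z = (yi - mu_hat) / np.sqrt(vi)
print(z.mean(), z.std(ddof=1))
```

A funnel shape, outliers beyond roughly ±2 to ±3, or a trend against study precision in these residuals would all be red flags worth investigating.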
 
And when we use the restricted maximum likelihood (REML) method, we assume it has successfully found the optimal parameter estimates. But without checking the likelihood profile, how do we know if it actually did? Most of us don’t bother to check—and that’s a problem.
 
So, what should we do?
My answer: don’t assume—verify! Some might say I’m overthinking this. Sure, some assumption violations—like non-normality—may not always impact results that much, according to simulations. But here’s the thing: you never know if that’s true for your specific dataset. The best way forward is to check. If you find assumption violations or model misspecifications, be transparent—report them, and interpret your results with caution. That said, I understand that checking every assumption and model fit metric manually can be tedious. This is where methodologists and software developers could step in—by creating pipelines that automate these essential checks.
 
Meta-analyses shape scientific understanding, policy, and real-world decisions. If we want them to provide truly reliable evidence, we need to stop mechanically clicking and pointing in a GUI software or running a couple of lines of R code without critical thinking. Because in the end, a meta-analysis is only as good as the care put into it.

    Author

    Posts are written by our group members and guests.



Created by Losia Lagisz, last modified on June 24, 2015