I-DEEL: Inter-Disciplinary Ecology and Evolution Lab

Split reference list helper for pilot and collaborative screening rounds

30/11/2022

by Coralie Williams

When screening for a systematic review or meta-analysis, we conduct several pilot screening rounds. Pilot screenings help us refine our search string and decision tree, and increase the overall accuracy of our screening for literature reviews (check out this nice guide from the I-DEEL team for more info: Foo et al., 2021).

During a pilot screening, we want to select a random subset of references that would be a representative sample of the full set. When possible, screening rounds are conducted in collaboration with another reviewer. To speed up the screening process, we sometimes want to randomly allocate a subset of papers to a collaborator by splitting a reference list into subsets.

There are two reasons we'd want to automate the selection and splitting of a reference list:
  1. It is time-consuming to randomly select papers (>100 papers is tedious to select by hand!)
  2. We are not very good at selecting things at random (actually, computers aren't good at selecting truly at random either*)

Below is the R (www.r-project.org) code for two functions that may come in useful when conducting your pilot and collaborative screenings with Rayyan (https://rayyan.ai/), or any other software where you can upload your pilot reference list.

1. Select random pilot set:

First, load the getpilotref function below in your environment:
# -----------------------------------
# getpilotref function 
# -----------------------------------
## Description: 
#     Function to obtain a random subset of references for pilot screening.
#
# Arguments
# - x: data frame with reference list
# - n: number of papers for pilot subset (default is 10)
# - write: logical argument whether to save the pilot list as a csv file 
#   in the current working directory (default is FALSE).
# - fileName: name of file (default is "pilot")

getpilotref <- function(x, n = 10, write = FALSE, fileName = "pilot") {
  
  if (length(n) == 1L && n %% 1 == 0 && n > 0 && n <= nrow(x)) { 
    
    # randomly sample n row indices, keeping the original row order
    pilot <- x[sort(sample(nrow(x), n)), ]
    
  } else {
    # error message: the n value provided is not valid 
    stop("Incompatible value n supplied, please check.
         n must be a positive integer no larger than the total number of references provided.") 
  }
  
  if (write) {
    
    # save the generated pilot list in the working directory (requires the readr package)
    write_csv(pilot, paste0(fileName, ".csv"), na = "")
    
    # print out a summary of the saved file name
    cat(paste0("Pilot random sample set of ", n, " articles is saved as: ", fileName, ".csv"))
    
  }
  
  return(pilot)
}
Let's try it out
Load the example csv file that was exported from Rayyan (a reference list of papers in Ecology & Evolutionary Biology with the word "butterflies" in their title):

# Read example butterfly reference list
articles <- read.csv("https://raw.githubusercontent.com/coraliewilliams/2022/main/data/articles_butterfly.csv")
First, let's obtain a random set of 10 papers without saving it as a csv file:
p10 <- getpilotref(articles)
Now, let’s obtain a subset of 100 papers for a pilot screening and save the subset as a csv file called pilot100.csv. Make sure you have the readr package installed and loaded in your environment.
library(readr)
p100 <- getpilotref(articles, n = 100, write = TRUE, fileName = "pilot100")
## Pilot random sample set of 100 articles is saved as: pilot100.csv
This will save a csv file pilot100.csv in your working directory. If you are unsure where your working directory is, run getwd() in your console.
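Since sample() draws pseudo-randomly (see the footnote at the end of this post), you can also make a pilot selection reproducible by setting the seed first. Here is a minimal base R sketch (the seed value 2022 and the toy numbers are arbitrary):

```r
# Setting the seed makes a pseudo-random draw reproducible:
# the same seed always yields the same selection.
set.seed(2022)
draw1 <- sample(1:500, 10)   # e.g. 10 row indices out of 500 references

set.seed(2022)               # reset the seed ...
draw2 <- sample(1:500, 10)   # ... and repeat the draw

identical(draw1, draw2)      # TRUE
```

Calling set.seed() just before getpilotref() works the same way, so a collaborator can regenerate the exact same pilot set.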
2. Split reference list with another collaborator

Load the splitref_prop function in your environment:
# -----------------------------------
# splitref_prop function 
# -----------------------------------
## Description: 
#     Function to split a reference list in two based on input proportions.
#
## Arguments: 
# - x: data frame with reference list
# - p: vector of two positive numerical proportions (one per split) that must sum to 1.
# - write: logical argument whether to save the two splits as csv files in the current working directory (default is FALSE).
# - sname: file name prefix for the two split csv files (default is "split").
splitref_prop <- function(x, p = c(0.5, 0.5), write = FALSE, sname = "split") {
  
  if (length(p) == 2L && is.numeric(p) && all(p > 0) && isTRUE(all.equal(sum(p), 1))) {
    
    # randomly shuffle the row indices of the reference list
    rids <- sample(1:nrow(x))
    
    # get the index of the row to split on using the first proportion value
    spl <- floor(p[1] * nrow(x))
    
    # get the row indices of the two subsets
    indx1 <- rids[1:spl]
    indx2 <- rids[(spl + 1):nrow(x)]
    
    # assign the two subsets as data frames in the global environment
    split1 <<- x[indx1, ]
    split2 <<- x[indx2, ]
    
    # print out a summary message
    cat("Reference list was randomly split into 2 proportions of", p[1] * 100, "% and", p[2] * 100, "%")
    
    if (write) {
      # save the files (requires the readr package)
      write_csv(split1, paste0(sname, "_set1.csv"), na = "")
      write_csv(split2, paste0(sname, "_set2.csv"), na = "")
    }
    
  } else {
    # error message: the p values provided are not valid
    stop("Incompatible values for p (proportions) supplied, please check.
         Proportion values must be two positive numbers that sum to 1.")
  }
}
Let's try it out
Using the example butterfly reference list, let's first split it into two equal parts (50% each):
splitref_prop(articles)
## Reference list was randomly split into 2 proportions of 50 % and 50 %
This will give you two separate data frames to share between two reviewers: split1 and split2.

Now let's put 30% of the references in the first subset (split1) and 70% in the second subset (split2), for example if one reviewer has more time to spend on the screening:
splitref_prop(articles, p=c(0.3,0.7))
## Reference list was randomly split into 2 proportions of 30 % and 70 %
Let’s save the 30% and 70% split list of references as csv files with the suffix “testsplit”:
splitref_prop(articles, p=c(0.3,0.7), write=T, sname="testsplit")
## Reference list was randomly split into 2 proportions of 30 % and 70 %
This will save two csv files, testsplit_set1.csv and testsplit_set2.csv, in your working directory.
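Under the hood, the split is just index arithmetic on a shuffled row order. Here is a self-contained base R sketch of the same idea on a hypothetical 10-row toy list (toy, s1 and s2 are illustrative names, not part of the function above):

```r
# toy reference list of 10 rows
toy <- data.frame(id = 1:10, title = paste("Paper", 1:10))

rids <- sample(nrow(toy))        # shuffled row indices
spl  <- floor(0.3 * nrow(toy))   # split point for a 30/70 split: 3

s1 <- toy[rids[1:spl], ]                # first 3 shuffled rows
s2 <- toy[rids[(spl + 1):nrow(toy)], ]  # remaining 7 rows

nrow(s1) + nrow(s2) == nrow(toy)        # TRUE: the splits cover the full list
```

Because the indices come from one shuffle, the two subsets are guaranteed to be disjoint and to cover the whole list between them.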
*computers aren’t really good at selecting truly at random…Random number generators from most computer programs are actually “pseudo-random”, meaning they are produced from a deterministic mathematical model or algorithm. The R code above uses a pseudo-random number generator. Pseudo-random number generators are usually good enough for their intended purpose (basically better than what any human could do). A good pseudo-random number generator will reproduce statistics that are consistent with true randomness, but they are not truly random. A truly random number can be generated based on a constantly changing physical process that can’t be modelled as an algorithm. If you’re curious about true randomness check out these websites: https://www.random.org/; https://qrng.anu.edu.au/random-colours/.


(Any comments, questions or feedback, you can reach me at: [email protected])

Say goodbye to fixed- and random-effects meta-analyses

27/10/2022

By Yefeng Yang

As I have been doing more surveys of meta-analytic practices in many disciplines and re-analysing more published meta-analysis (MA) papers, there is one “recommendation” that is growing stronger and stronger in my brain. That is, we should say goodbye to traditional fixed- and random-effects MAs and conduct our MAs using advanced methods like multilevel and multivariate models, because meta-analytic datasets are often multilevel and multivariate in nature. Doing so makes sure you properly handle statistical issues like dependency and heteroscedasticity, resulting in more robust parameter estimates and inferences. My main argument is that in the “worst-case” scenario, where your dataset does not have such a complex structure, these advanced models will automatically reduce to normal fixed- or random-effects models, with similar (or identical) results to those expected. More importantly, applying advanced methods can help you decompose variances (Figure 1) and separate correlations of true effects from those of observed effects (Figure 2), delivering new biological insights. I can see that between-study heterogeneity and correlations are overestimated in many published meta-analyses using fixed- and random-effects models.
Figure 1. Imaginary example of hierarchical data structure.
Although these advanced methods are good, there are (at least) three remarks worth noting here. First, all your models should be built strictly on predefined questions (e.g., a priori hypotheses). Second, before applying these models, you need to correctly understand the statistical theory behind them. Otherwise, you are very likely to disseminate misleading information if you publish results from them. Third (but not least), do not use complex models to fit a small-sample-size dataset. This is especially true for multivariate models, which are often heavily parameterized (even overparameterized). So, always do (at least some basic) model checking (e.g., likelihood profiles, convergence) to ensure the stability of your model fitting.
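To make this concrete, below is a minimal sketch of a multilevel model fitted with the metafor package, compared against an ordinary random-effects model. The data frame dat and its columns (yi, vi, study, es_id) are simulated, hypothetical stand-ins rather than real data, and metafor is assumed to be installed:

```r
library(metafor)

# simulate a small hypothetical dataset: 20 studies, 3 effect sizes each
set.seed(1)
dat <- data.frame(study = rep(1:20, each = 3), es_id = 1:60)
dat$vi <- runif(60, 0.01, 0.05)        # sampling variances
dat$yi <- 0.3 +                        # overall mean effect
  rnorm(20, 0, 0.2)[dat$study] +       # between-study deviations
  rnorm(60, 0, 0.1) +                  # within-study deviations
  rnorm(60, 0, sqrt(dat$vi))           # sampling error

# multilevel model: variance is decomposed into between-study (study)
# and within-study (es_id) components
ml <- rma.mv(yi, vi, random = ~ 1 | study / es_id, data = dat)
summary(ml)

# ordinary random-effects model for comparison; if the within-study
# component were truly zero, the multilevel model would reduce to this
re <- rma(yi, vi, data = dat)
```

Note that rma() here lumps all heterogeneity into a single variance component and ignores the dependence among effect sizes from the same study, which is exactly the problem the multilevel model addresses.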
Figure 2. Joint probability distribution (bivariate normal joint density). Photo source: Multivariate normal distribution. (2022, October 16). In Wikipedia. https://en.wikipedia.org/wiki/Multivariate_normal_distribution
As I have learned more about statistics, I have realised that many methods are just a special form of a more general framework. For example, the (two-sample) Student t-test is a special form of ANOVA, which is a special form of linear regression, which is a special form of the generalized linear model or linear mixed model, which is a special form of the generalized linear mixed model, which is a special form of the generalized additive mixed model. In the same vein, fixed-effect MA is a special form of random-effects MA, which is a special form of a multilevel or multivariate model. I can imagine that one might disagree with “say goodbye to fixed- and random-effects meta-analyses”. For example, fixed-effects MA can still provide valid inferences if you limit your results to the included studies (e.g., conditional inference). I acknowledge this is true as long as you are not going to generalize results beyond the included studies. I know asking people to resort to complex methods is difficult because people like easily understandable tools - just think about the P-value. I am always open and happy to see different ideas. Lastly, all the above claims only represent my personal intuition and opinion (I might extend them into a paper in future). They might be wrong and do not necessarily speak for my lab’s attitudes toward meta-analyses.

Attending an overseas conference – Ecological Society of America 2022

28/9/2022

by Samantha Burke

After over two years of lockdown, I had the opportunity to leave Australia to attend the Ecological Society of America’s (ESA) joint conference with the Canadian Society for Ecology and Evolution (CSEE). This conference marked my first time presenting an oral talk outside of UNSW. While it was exciting to share my research with others, I found learning about others’ research and networking with new people to be an equally exciting experience.

As my projects consist of systematic-like research, I was thrilled to see ESA created an entire session dedicated to meta-analysis in ecology. Ecologists are relatively new to conducting meta-analysis of their data, so this session was well-attended and directed conversation towards improving meta-science while it’s still in its early stages in ecology. These talks were all excellent and highlighted the upcoming importance and challenges of conducting systematic-like research in ecology and evolutionary biology.

In addition to meeting new people, I was able to connect with researchers I already knew. While in Montreal, I was able to meet I-DEEL’s newest post-doc, April Martinig, in person. April has been working remotely for the past few months, so it was great to attend her presentation on her previous work examining predator-prey interactions in culvert animal passages. As a Canadian citizen, she knew of the best places to go in Montreal, and we chatted over a delicious vegan lunch. We should all look forward to the research she’ll conduct with I-DEEL.

I also had the opportunity to meet members of the Society for Open, Reliable, and Transparent Ecology and Evolutionary biology (SORTEE), of which I’m a member. Even though I went to Canada intending to attend the ESA conference, SORTEE members attending the conference gathered for a mini meetup in Montreal. The society was able to reach out to more ecologists at the conference, and many people came to the meetup to hear firsthand what SORTEE is all about. If you’re interested, please check out a previous blog post by Rose O’Dea and the SORTEE website.

Attending a conference was such a privilege, especially one as diverse as ESA’s 2022 Conference. I look forward to continuing to share my work and learn from others.
​
SORTEE meetup at the ESA conference. Photo Credit: Dominique Roche

I-deel at ESEB2022 congress

24/8/2022

By Losia Lagisz
13 - 19 August 2022 was a very busy and fun week – a week at the ESEB (European Society for Evolutionary Biology) Congress in Prague, Czech Republic.

This congress was very special to us for five reasons:
  1. We had four I-deel members attending (Shinichi, Losia, Szymek and Patrice) and one associated member (Totoro). Unfortunately, somehow, we do not have a photo with all of us together!​
  2. For Shinichi and Losia it was their first in-person conference in three years, and also their first overseas travel since the start of the Covid pandemic. For Totoro it was his first conference ever (and he did very well with his poster presentation).
  3. There were hundreds of great presentations and posters – physically impossible to see them all. The diversity of topics and ideas was exciting and inspiring, as usual at ESEB.
  4. We got to catch up with many of our good old friends and collaborators. We also met many interesting new people.
  5. We organised a SORTEE in-person meet-up, with over 20 people attending from around the world. Some new members will potentially be joining SORTEE and adding their forces to the credibility revolution in ecology and evolution!
Big thanks to the organisers of ESEB2022 and we hope to be able to attend the next one – ESEB2025 to be held in Barcelona, Spain!

Ireland’s Ancient Wild Side: There was once more than just whiskey and stout

17/7/2022

by Kyle Morrison
​
In today’s world Ireland is famous for vibrant cities, cosy pubs and cold Guinness, but in a simpler time – before we humans got involved – it was the land of giant deer, grey wolves and brown bears. Although some of these animals can still be seen elsewhere, a few sadly cannot and will never be seen again. Here are five of the coolest animals that ever roamed the Emerald Isle.
Number 1 – The Great Auk
Despite being coined the original penguin, great auks were not actually penguins at all but a fine product of convergent evolution. Ironically, the Latin name for the great auk is Pinguinus impennis, and when European explorers found the first penguins in the southern hemisphere, they noticed their uncanny resemblance to the great auk, hence the modern name for penguins. The great auk had a white belly and a black back, stood around 85 cm tall and weighed around 5 kg. It had small wings for swimming and a large beak for eating fish and krill. The great auk was once a common sight along the Irish coastlines, with remains being found in popular tourist spots in Donegal and Galway. Much like penguins, the great auk was utterly defenceless on land, which unfortunately contributed to its eventual demise in the 1840s due to widespread hunting for food and bait.
Number 2 – The Irish lynx
Currently, lynxes (or bobcats) are mainly found across Siberia and North America, but these majestic wildcats were once widespread across the island of Ireland. The presence of lynx in Ireland wasn’t known until the late 1930s, when a few hikers found a mandible bone in County Waterford. It’s likely that the Irish lynx roamed the woods and countryside, preying on small deer and hares. Lynxes are known to have survived in the British Isles until the Romans arrived; however, there is no indication of when they went extinct in Ireland. Recently, the lynx has been considered for a reintroduction project, helping to balance woodland ecosystems and increase biodiversity. The aim is that the reintroduction of the native lynx would reduce numbers of invasive sika deer, which currently have no natural predator.
Number 3 – The Irish Wolf
Wolves were a major part of the postglacial fauna in Ireland, dating back as long ago as 34,000 BC. The Irish word for wolf is Mac Tíre, which means “Son of the Countryside” and illustrates how important wolves were to the people of Ireland. In fact, many Irish stories, myths and folklore are about wolves and how the Irish gods adored them. Before the great agricultural revolution on the island, most of the countryside was clothed in thick forest, which was perfect hunting habitat for wolves. It wasn’t until the arrival of Oliver Cromwell in the 1650s that wolves in Ireland became troubled. Cromwell wanted to be rid of the wolves in Ireland and shockingly ordered a mass culling of all wolves, offering £5 for a male, £6 for a female and 40 shillings for a cub. Unfortunately, the number of wolves began to plummet, and the last wolf in Ireland was killed in 1786 in County Carlow. Today, there are only a few reminders of the existence of wolves in Ireland: ring forts that were once used to protect sheep, place names, and the great Irish Wolfhound.
Number 4 – The Irish Bear
For thousands of years brown bears roamed Ireland, preying on deer and fishing in streams for salmon. Much like modern bears in North America, Irish bears hibernated in caves over the long winter months. Amazingly, scientists have revealed DNA evidence suggesting that the Irish bear is the maternal ancestor of the polar bear, which conflicts with the previous opinion that North American bears were the ancestors. Additionally, it is thought that the two species may have mated opportunistically during the last 100,000 years, which means that they must have interacted during the last ice age. Unfortunately, the Irish brown bear went extinct around 2,500 years ago, mostly due to great deforestation and hunting in Ireland. There is a famous Irish myth about a sleeping bear god who will rise from hibernation and come to the aid of his people when called. The summoners of the bear god were called the Mahons, the sons of the bears. Ironically, the Mahons later became the McMahons, which is now a common surname around the world. Today, all that remains to remember the Irish bears are a few sculptures and a Guinness poster.
Number 5 – The Irish Elk
Megaloceros giganteus, the Irish elk, is one of the largest deer that ever lived. It stood seven feet tall at the shoulder and its antlers spanned an impressive 12 feet. Their enormous antlers are thought to be the product of sexual selection, a trait to impress females. It had long been thought that the antlers were purely for display, but recently scientists have suggested that they may also have been used in contests. At their largest, males weighed a massive 1,500 lbs, roughly the size of the modern Alaskan moose. Strangely, the Irish elk is not an elk at all but a deer; the name was coined because of its sheer size, and because the original excavators believed they had found the remains of an extinct species of elk. The Irish elk was not exclusive to Ireland but was named so because its most famous and best-preserved fossils were found in peat bogs across the island. Although impressive, their wide antlers became a maladaptation and contributed to their eventual extinction around 7,700 BC.

Image sources:
  • Auk image: https://www.bbc.com/news/science-environment-50563953
  • Irish Lynx Image: https://www.breakingnews.ie/lifestyle/four-amazing-animals-that-could-be-reintroduced-to-ireland-1019270.html
  • Ring fort image : https://www.amazing-grace.ie/an-grianan-of-aileach
  • Irish wolves’ mythology Image: https://earthandstarryheaven.com/2015/05/13/irish-werewolves/
  • Guinness Bear Image : https://commons.wikimedia.org/wiki/File:Guinness_StoreHouse,_Dublin._Advertising_Exhibit._-_geograph.org.uk_-_626611.jpg
  • Elk Image: http://news.bbc.co.uk/2/hi/uk_news/northern_ireland/8316262.stm


What a beautiful hypothesis! It explains a lot!

27/6/2022

by Lorenzo Ricolfi
​

The Italian version of Charles Darwin's The Origin of Species opens with a preface by Luca and Francesco Cavalli-Sforza. They are two of the four children of Luigi Luca Cavalli-Sforza, an Italian geneticist, academic, researcher and professor emeritus at Stanford University in California, who died in 2018 and was known for his research in population genetics. He was also involved in anthropology and history through his studies of human migration.
From https://www.nytimes.com/2018/09/19/obituaries/luigi-cavalli-sforza-dies.html
Since I could not find the English version of the preface anywhere, I would like to translate and summarize it in this article. Therefore, the following text is a summary and translation of the preface written by Luca and Francesco Cavalli-Sforza.

Translation: "It is said that when Laplace, the great French astronomer, presented Napoleon with a copy of his Celestial Mechanics, in which he described universal gravitation and advanced hypotheses on the formation of the solar system, Napoleon remarked: "Mr Laplace, they tell me that you have written this big book on the design of the universe, without ever mentioning its Creator". "This is a hypothesis I did not need", replied Laplace. When Napoleon, amused, reported this conviction to the mathematician Lagrange, he exclaimed: "What a beautiful hypothesis! It explains a lot! ". Two hundred years later, modern texts on astronomy continue to describe the behaviour of celestial bodies without the need for a God creator. In science, no unnecessary hypotheses are introduced to explain events. While no one nowadays argues about divine intervention in the history of the cosmos, a similar question resurfaces from time to time in biology. Since Darwin's time, the theory of evolution has made enormous progress and can explain a great deal of the history of life. Today, our relationship with primates is no longer in question. It has been proven beyond any reasonable doubt.
From Stutz, 2014: https://www.researchgate.net/publication/264417273_Embodied_Niche_Construction_in_the_Hominin_Lineage_Semiotic_Structure_and_Sustained_Attention_in_Human_Embodied_Cognition/citations
Nevertheless, it still meets with the most vigorous resistance from the ultra-conservative fringes of Baptist Christians (a powerful political force in the south of the United States) and ultra-orthodox Jews. On the other hand, it does not seem to create any difficulties for either Catholicism or Islam. What is questioned today is whether evolution is sufficient to explain the extraordinary complexity of life: how is it possible that living beings have developed such a variety of forms? How can an organ such as the eye have achieved its extreme complexity only under natural forces?
From: https://www.phos.co.uk/journal/the-evolution-of-sight
Some say there must be an Intelligent Design guiding the history of life, intervening in the mechanisms of evolution (with a view to some goal, it is assumed, but this is not stated). The Intelligent Design movement was born as a political phenomenon in the United States; it is promoted by foundations financed by ultra-conservative billionaires and engaged in specific activities, such as supporting those who sue state schools to have the biblical account of creation taught alongside the theory of evolution as an equal alternative. The extreme right-wing label with which the movement was born does not help its spread in Europe, where there has been enough ideology. The absence of scientific arguments makes it tricky to counter directly. An organism can only live if it interacts with its living environment to obtain food, and can only pass on its DNA to the next generation if it becomes an adult and reproduces. However, the environment is constantly changing. Only those who remain 'adapted' to their environment can continue to live. Natural selection acts by automatically filtering, like a rigid sieve, the best types to survive and reproduce, environment by environment and circumstance by circumstance.
From: https://www.britannica.com/science/evolution-scientific-theory/Adaptive-radiation
The theory of evolution by mutation and natural selection says precisely this: living species evolve under the impetus of chance and necessity. Darwin's theory of evolution provides an excellent key to interpreting what we see around us and deepening our knowledge of the molecules that make life possible." - End of translation.
​
Science and religion have always had harsh disagreements about explaining the existence of the observable universe from the earliest known periods through its subsequent large-scale evolution (of both abiotic and biotic factors). My opinion is that science should not be concerned with the beliefs of others if the views of others do not limit science. But, at the same time, religions should help scientists find the right path following moral rules and ethics. Both science and religion are great powers that give humankind its singularity. Therefore, they should work together to make our species more just, educated and happy.

Farewell and welcome

31/5/2022

by Shinichi
Picture
Last week, the I-DEEL lab gathered for a farewell party for Cat, who worked on the "PFAS project" for the last two years. This project is our lab’s first research synthesis project in environmental sciences, and Cat played a major role. Now she is in Europe, travelling around the world for the next several months (detoxifying PFAS, I presume).

We also welcomed four new PhD students to our lab: Lorenzo, Kyle, Coralie and Jess. Lorenzo will further synthesize the PFAS literature while Kyle will work on the pesticide pollution literature. Coralie will develop new meta-analytic tools, working with Prof David Warton. Jess, who already did her Honours degree with us, will apply deep learning methods to Australian wildlife image data, working with Prof Richard Kingsford, people from Taronga Zoo, and NSW Wildlife and National Parks.

This is going to be a huge variety of research work - just like the food on the table (see picture above - this is a potluck party where everybody brings a dish!). As they say: “Variety is the spice of life”.
​I am very much looking forward to what the future will bring to I-DEEL!

Vegan food guide - Sydney edition

30/4/2022

By Patrice Pottier

Being vegan for nearly five years, I have noticed drastic changes in the accessibility and fanciness of vegan food. The days when people thought vegan food consisted only of salads and seeds are far behind! Plant-based foods can take all shapes and forms, and I guarantee you won't be able to tell some meals are vegan in a blind taste test.

Let me introduce you to 10 vegan restaurants you must try in Sydney. Forget the old dry veggie patty - I guarantee you won’t be skeptical about vegan food after trying those places.
1.   I Should be Souvlaki
​
Fan of mock meats, garlicky sauces and delicious wraps? Souvlaki got you covered! I swear even the hardest “carnivores” will succumb to the flavours of the signature Souvlaki.
​Recommendation: Mix (Soy-based “lamb” and “chicken”) Signature Souvlaki.
Address: 399 King St, Newtown
​
2.   Golden Lotus
Probably the best Vietnamese vegan restaurant in Sydney: an impressive variety of dishes, great service, and delicious food.
Recommendation: Laksa, chef’s recommendations.
Address: 341 King St, Newtown
​
3.   Yulli’s
While Yulli’s is mostly known for their brewery, they also make delicious food!
The dishes are delicate, beautifully presented, and flavourful (especially with a freshly brewed beer).
Recommendation: San Choy Bow, Pizza
Address: 417 Crown St, Surry Hills
​
4.   Shift Eatery
Who doesn’t like toasties? These ones taste like no other! The whole staff is vegan, and the food is fresh and flavourful.
Recommendation: The Reuben’s vegan brother, Steve
Address: 2/241 Commonwealth St, Surry Hills
​
5.   Nutie
Craving sweets? Nutie has the best treats around the area.
Recommendation: Strawberry cheesecake, Donuts
Address: 44 Holt St, Surry Hills
​
6.   Lonely Mouth
Have you tried RaRa ramen and loved it? Well, the folks from RaRa made a fully-vegan version of their restaurant – Lonely Mouth. There are not many options on the menu, but more than enough to fill your stomach!
Recommendation: TanTanmen, Sunflower & Hempseed Shoyu
Address: 275 Australia St, Newtown
​
7.   Gigi Pizzeria
Authentic Napoletana-style, woodfired pizza. I don’t think I need to say anything else!
Recommendation: Calzone con Melanzane, Lasagna
Address: 379 King St, Newtown
​
8.   La Petite Fauxmagerie
Before you say that “Vegan cheese is boring” – try this place! Fetta, mozzarella, ricotta, blue, brie, halloumi – options are endless, and they will blow your mind!
Recommendation: I am not a big fan of cheese, so I haven’t tried this place myself. It is, however, highly recommended by cheese-lovers!
​Address: 412 King St, Newtown

9.   Oh My Days
Being French, I used to miss croissants and other pastries. Not anymore! Oh My Days has a great variety of pastries and they taste really authentic.
Recommendation: “Bacon” & “cheese” croissant, Almond croissant
Address: 99 Glebe Point Rd, Glebe

10.  Soul Burger
Soul Burger is probably the best option around campus if you are craving burgers. The ingredients are fresh, and the burgers look and taste amazing!
Recommendation: Sydney Sider, Southern Fried “chicken”
Address: 49 Perouse Rd, Randwick

This is, of course, only a small sample of the amazing range of options Sydney has to offer. Want to find more vegan places? Check out HappyCow – an app that lists vegetarian and vegan restaurants worldwide.
​
I hope you enjoy this culinary discovery! 😊

The endless fight for acclimatization

31/3/2022

by Lorenzo Ricolfi

"It is not the strongest of the species that survives, nor the most intelligent; it is the one most adaptable to change.”

This quote is cool, but although it is often attributed to Charles Darwin's Origin of Species, Darwin never wrote it! Instead, it was formulated by Leon C. Megginson, Professor of Management and Marketing at Louisiana State University. (To read about this anecdote, click on this link.)
Picture
Anyway, adaptation is a capacity that plays an essential role in evolutionary biology; it is a dynamic process that adapts organisms to their environment, improving their evolutionary fitness. Similarly, but on a different time scale, an individual's acclimatization capacity to a change in its environment enables it to maintain fitness across various environmental conditions. My name is Lorenzo Ricolfi, and, like anyone who has survived these two years of the pandemic, I struggle every day to acclimatize to change.
The COVID-19 pandemic has dramatically upset our habits and daily routines. Moreover, it has presented us with a tough challenge: to cope with dramatic and sudden changes. I lived my life in Italy, studying and working as a researcher at the University of Rome, until January 2020, when I took a plane to Brisbane. Study and work followed each other without a gap year, and I needed a breath of fresh air and an adventure before returning to Italy six months later. It was a good plan.

Well, it never came true. The World Health Organization declared the outbreak a Public Health Emergency of International Concern on the 30th of January 2020, 21 days after my landing in Australia. On the 31st of January, two Chinese tourists in Rome tested positive for the virus, and Italy became the first country in Europe to be affected by the pandemic. A month and a half later, Italian army vehicles had to transport the dead out of the city of Bergamo as its crematorium struggled to cope. This disaster happened only a week after the World Health Organization officially declared COVID-19 a pandemic. That was the situation.

Australia at that time was in a bubble of its own, far removed from what was happening overseas. I was reading the news on the web, and it all seemed absurdly surreal. Virus? Wheezing and difficulty breathing? Social distancing? Masks? It was hard to assess and assimilate the news with reason and objectivity. And I had to make a decision immediately: return to my country or stay in Australia. How could I make such a decision lightly?

There were many factors and implications to consider. Italy was in full lockdown, and although I was worried about my loved ones, I decided to stay, not knowing when I would return. I would return when the situation improved and the pandemic had passed. Days turned into weeks and weeks into months, and I began to need work. I held about ten different jobs in the time that followed. I worked as a dishwasher, a waiter, a kitchen hand, a warehouse worker, a driver, a delivery guy and a carpenter. I had never done any of these jobs before in my life. Months turned into years, and I realized that I wasn't coming back anytime soon. Australia closed its borders. There were no more planes in the sky.
Suddenly, my life was completely different, and the sense of nostalgia was strong. If I couldn't go home, I wanted at least to get back to what I was passionate about. I decided to take the English exam necessary to apply for a research project at the university level. I studied and passed the exam with flying colours.

Meanwhile, while surfing the websites of various Australian universities, I found an exciting laboratory at the University of New South Wales in Sydney. As luck would have it, the lab was looking for a PhD student with my background. I immediately got in touch and, after devising a research proposal that matched my interests and the knowledge and skills of the lab, I applied for a scholarship from the Australian government to cover the PhD.

It is now the beginning of April 2022, it has been two years and three months since I landed in Australia, and I started my PhD a couple of months ago. The pandemic situation has improved thanks to vaccines, although the pandemic is not over. And I have still never returned home. Over the last two years, the changes in my life have been massive, but I am thrilled with where they have taken me, even though they were unplanned and presented me with some tough times and challenges.
The point of all of this is that although we constantly try to categorize, order, and simplify reality, it is permeated by the chaos in which change is the engine. We need order and stillness in our environment and minds, but we cannot avoid change. Instead, we must learn to be flexible enough to shape ourselves without breaking or losing our identity. It is a challenging game based on compromise and sometimes on acceptance and letting go.

The pandemic has abruptly put reality before us, where not everything goes as planned. But it also reminded us of one thing: there is nothing wrong with that. Plans in life are necessary, but fulfilling them to the letter is not what brings happiness. Instead, an idea and a plan can evolve into something completely different. This turning point may initially be seen as a failure, a crack in the wall of our lives. However, it is only after time that we realize that the plan was but one of many steps, rather than the dividing line between success and failure.
0 Comments

Around meta-analysis (14): deduplicating bibliographic records

28/2/2022

0 Comments

 
by Losia Lagisz
 
Removing duplicated records can be cumbersome. When collating bibliographic records from multiple literature databases, both the total number of records and the proportion of duplicates can be high, making manual removal of duplicates extremely time-consuming. Reference managers such as Zotero or EndNote, and especially the screening platform Rayyan, require manual resolution of each set of potentially duplicated records. Note that the deduplication algorithms available in all of these tools are reasonably good at detecting (flagging) exact and non-exact duplicates, but they are not perfect, so combining different approaches is recommended anyway.
 
Here, I present an efficient workflow in which records from multiple sources (literature databases) are combined in Rayyan (https://rayyan.ai/), then automatically deduplicated using an R script (www.r-project.org), and finally uploaded into Rayyan again for a final round of deduplication and screening. Importantly, apart from Rayyan and R, no other software is needed (but, at any stage, you can import/export lists of records into your reference manager to view the records or convert file formats). I assume you are already quite familiar with Rayyan and R.
​ 
The workflow:

1. Gather the bibliographic files.

Download lists of bibliographic references (with abstracts) from the databases used to run the literature searches. Most of the time, exporting them as a .ris file works best. Rayyan has guidelines for the most commonly used databases on its upload page (see the screenshot below).
[Screenshot: Rayyan's upload page with database-specific export guidelines]

2. Upload files into Rayyan.

Create a new project in Rayyan and upload all files into it. This will create a combined list of records.

3. Run the deduplication algorithm in Rayyan (optional).

This will give you an idea of how many duplicated records there are in the combined set (if fewer than about 200, you may want to resolve them manually in Rayyan). To run the algorithm, press the “Detect duplicates” button near the top right corner of the view with the list of combined references in Rayyan.
4. Export the combined list of records from Rayyan.

This will create one .csv file with all references in the same format. To export the records, press the “Export” button near the top right corner of the view with the list of combined references in Rayyan. In the pop-up window, select “All” and the “CSV” format (you can include all the fields listed below these options). Note that Rayyan will send you a link via email to download a compressed file. After decompressing, rename the .csv file to something usable (e.g., "FILENAME.csv") and place it in your R project folder.
5. Upload combined .csv file into R.

Load the R packages needed:
 
library(tidyverse) # https://www.tidyverse.org/
library(synthesisr) # https://CRAN.R-project.org/package=synthesisr
library(revtools) # https://revtools.net/
dat <- read.csv("FILENAME.csv") #load the file
dim(dat) #see the initial number of uploaded references

 
6. Prepare data for deduplication in R.

We will deduplicate by comparing titles. Before doing so, it is good to tidy them up by bringing them to the same case and removing extra white spaces and punctuation. We save these “processed” titles in a new column.
 
dat$title2 <- stringr::str_replace_all(dat$title,"[:punct:]","") %>% str_replace_all(.,"[ ]+", " ") %>% tolower() # Removing all punctuation and extra white spaces
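If you prefer not to depend on stringr for this step, the same normalisation can be done in base R. A minimal sketch (the title below is invented for illustration):

```r
# Base-R equivalent of the title clean-up above (illustrative only)
raw_title <- "Meta-analysis: A  Review!"  # hypothetical title
clean_title <- tolower(gsub(" +", " ", gsub("[[:punct:]]", "", raw_title)))
clean_title  # "metaanalysis a review"
```

Note that removing punctuation fuses hyphenated words (e.g., “meta-analysis” becomes “metaanalysis”), which is fine here because both copies of a duplicated title are processed the same way.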

 
7. Remove exact title matches in R.

This step uses processed titles to create a new smaller list of references with exact duplicates removed. It will save computational time for the next step (detection of non-exact duplicates).
 
dat2 <- distinct(dat, title2, .keep_all = TRUE) #reduce to records with unique titles (removes exact duplicates)
 
dim(dat2) #see the new number of records
#View(arrange(dat2, title2)$title2) #an optional visual check - sorted titles
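To see what this step does, here is a toy example with invented titles; the base-R `duplicated()` call below is equivalent to `dplyr::distinct()` with `.keep_all = TRUE`:

```r
# Toy data: three records, two of which share an identical processed title
toy <- data.frame(id = 1:3,
                  title2 = c("ecology of birds", "ecology of birds", "plant traits"))
toy_unique <- toy[!duplicated(toy$title2), ]  # keep the first record of each title
nrow(toy_unique)  # 2
```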

 
8. Deduplicate by fuzzy matching the remaining titles in R.

This step uses string distances to identify likely duplicates - it may take a while for long lists of references.
 
duplicates_string <- synthesisr::find_duplicates(dat2$title2, method = "string_osa", to_lower = TRUE, rm_punctuation = TRUE, threshold = 7)
 
#sum(duplicated(duplicates_string)) #number of duplicated records found
#View(review_duplicates(dat2$title2, duplicates_string)) #optional visual check of the detected duplicates.
# If needed, you can manually mark some records as unique (not duplicates) by providing their new record
# number from duplicates_string (duplicates share the same record number), e.g.:
#new_duplicates <- synthesisr::override_duplicates(duplicates_string, 34)
 
dat3 <- extract_unique_references(dat2, duplicates_string) #extract unique references (i.e. remove fuzzy duplicates)
dim(dat3) #new number of unique records
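For intuition about the fuzzy matching: `string_osa` counts the single-character edits (insertions, deletions, substitutions, and adjacent transpositions) needed to turn one title into the other, and pairs within the threshold (7 here) are flagged as duplicates. Base R's `adist()` computes the closely related Levenshtein distance, which is enough to illustrate the idea; the titles below are invented:

```r
# Two near-identical titles differ by only two character edits
t1 <- "effects of warming on coral reefs"
t2 <- "effect of warming on coral reefs."
adist(t1, t2)  # distance 2 (one deleted "s", one added "."), well under the threshold of 7
```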

 
9. Prepare the data for exporting from R.

Modify the data frame into a format that can be imported into Rayyan (files saved as .bib, .ris, or .csv cannot be directly uploaded to Rayyan due to some formatting changes that happen while processing them in R). This is done by first selecting only the key columns, saving them in BibTeX format (a .bib file), and then changing the record labels into the desired format.
 
dat3 %>% select(key, title, authors, journal, issn, volume, issue, pages, day, month, year, publisher, pmc_id, pubmed_id, url, abstract, language) -> dat4 #select the key columns
 
write_refs(dat4, format = "bib", file = "FILENAME_deduplicated.bib") #save into a bib file
 
readLines("FILENAME_deduplicated.bib") %>%
  stringr::str_replace(
    pattern = "@ARTICLE",
    replacement = "@article") %>%
  writeLines(con = "FILENAME_deduplicated.bib") #fix the record labels and save again as a .bib file
 

10. Import deduplicated records into Rayyan.

Create a new project in Rayyan and import the modified .bib file. Run the algorithm for detecting duplicates in Rayyan (see Point 3 above). This will reveal potential duplicates that were below the similarity threshold used in R (or have lots of formatting differences). These will need to be resolved manually in Rayyan (usually it is not a big number and some will require human intelligence to tell what counts as a real “duplicate”). After resolving these duplicates you are ready to start screening your deduplicated records in Rayyan.
 
Note: Unfortunately, record fields with authors and keyword information (and many other fields) are stripped from the original records in the above workflow, mostly by Rayyan. For this reason, records exported from Rayyan are usually not suitable for direct use in bibliometric analyses. But, at least, you can claim that your screening of bibliographic records in Rayyan was blinded to the authors’ identity.
0 Comments

    Author

    Posts are written by our group members and guests.



Created by Losia Lagisz, last modified on June 24, 2015