by Hamza Anwer
Before I started working with the I-DEEL lab (first as an intern and now as a PhD student), I was in the dark regarding this little species of fish, Danio rerio aka the zebrafish. Indigenous to South Asia, they are broadly distributed across parts of India, Bangladesh, Nepal, Myanmar, and Pakistan.
The Garvan Institute of Medical Research hosts a quality Biological Testing Facility which houses zebrafish. Over the last several months, I have gained a newfound admiration and respect for just how fascinating these fish really are. Most commonly used in biomedical studies as a vertebrate model for genetics and development, they are also robust enough to answer various questions in evolutionary biology, particularly with respect to behaviour. They can learn complex tasks, remember them, and display strong social cohesion.
We’ve been running experiments on how they respond to various stimuli, to better understand their seemingly complex cognitive processes. One such experiment involves observing how they respond to a novel environment while in an anxious state. Safe to say, my journey with zebrafish has only just started, and I’m excited at the prospect of seeing what else these amazing fish have to offer and learning more in the process.
by Alex Aloy
David Moore’s Developing Genome is comparable to popular non-fiction books I have read in the past. It is as accessibly and wittily written as Jared Diamond’s Guns, Germs and Steel or Tim Flannery’s best-seller about the world’s fate on the brink of a changing climate, The Weathermakers. There is a clever, “ladder-like” introduction of biological terminology as you read along, without the text being overly technical or laden with scientific jargon.
The book strikes a fine balance between introducing hard facts and concepts and citing analogous scenarios from daily life. One example is how Moore illustrates certain sections of the DNA strand in familiar household terms, such as a “yarn spool.” I also like his humorous touches, such as likening DNA processes, like the expression of a resulting trait or cellular product, to “a party that came together spontaneously.” Overall, the writing style makes the book enjoyable and interesting to read, at least for those uninitiated in the topic.
Developmental biologists: epigenetics rings a bell
The word “epigenetics” was first introduced by Conrad Waddington in the 1940s to refer to the intrinsic interaction between genes and molecules at the cellular level, leading to phenotypic outcomes. However, contemporary scientists have extended this definition to the interaction of the genetic (nuclear) material with extracellular environmental factors that sit, spatially, “on top” of the genome (hence the prefix epi).
The book provides a good review of advances in epigenetic research, mostly from the last decade. Most of the highlighted studies concern how genes (and consequently traits) are expressed or repressed, drawing on clinical psychology and biomedical experiments. Curiously, the process is not driven by genes alone but is largely influenced by their interaction with the environment. Specific examples range from serious diseases/disorders (e.g. Prader-Willi and Angelman syndromes) to fur coloration patterns in cats. In addition, a number of important developmental processes, such as X-chromosome inactivation in human females, have been linked to epigenetic causes. So developmental biologists knew this concept right from the start!
Developmental stage is important because environmental effects influence the expression of genes mostly in early life. In practice, if the goal is to intervene in or reverse these phenotypic expressions, the key is to look at, and act on, them at different developmental stages. This is where epigenetics comes in, elucidating the adverse effects of early-life exposure to external factors such as bad (or good) experiences, or memory and dietary deprivation, during development.
It is indeed Nature + Nurture
Turning the book’s pages one at a time has been revelatory. All along, I had carried (mostly from my biology undergrad days) the misconstrued idea that genetic makeup alone determines the traits and physiological or even behavioural tendencies one has in later life. Apparently, this is one of the main arguments raised by Moore: the need to dismiss the popular metaphorical fallacies associated with how we use and interpret the words DNA and genes. For example, genes have popularly been viewed as a “blueprint system”, a (computer) “program”, and even likened to a film script.
These are, in a way, misleading metaphors, because they convey a “grandmaster” plan, or seem to entail a pre-determined fate for how genes should function and develop. Epigenetic systems are much more of a two-way process, in which genes’ activities are largely influenced by external factors and vice versa. To quote, “it is not what (genes) you have, but more importantly how those genes perform and work.”
The book presents a wealth of supporting evidence from recent and emerging research on behavioural epigenetics that links the effects of environmental factors to the genetic makeup of individuals (i.e. via DNA methylation and histone modifications). Early experiments, for example, investigated the consequences of epigenetic states induced by good or bad experiences (of parental care) for eventual offspring behaviour. However, it is not clear how epigenetic factors affect individuals of succeeding generations at the population level in natural environments, or what mechanisms lie behind this.
A retrospect: Darwin’s finches
The idea that our genome continuously communicates with the environment “epigenetically” puzzles me at this stage. But one thing I realise is this: isn’t that supposed to be how organisms ultimately adapt and evolve under the Darwinian synthesis of evolution? It reminds me of the outcomes of some classic empirical work in ecology and evolution, such as the long-term study of Darwin’s finches by the British couple Peter and Rosemary Grant in the Galapagos Islands.
Decades of field data allowed them to observe changes in feeding adaptations, exhibited as adjustments in morphology and habitat use in response to an ever-changing environment. What really happened was successful adaptation and transgenerational inheritance of novel phenotypic changes, which may be partly mediated epigenetically.
Re-inventing Darwinian and Lamarckian views of evolution
One of the main arguments of the book is the need to revisit the universally accepted theory of evolution, or Neo-Darwinism, which is based on changes in gene frequencies within populations over space and time. Following recent advances in the mechanistic study of epigenetics, there is a need to explore how such processes play a role in broad evolutionary biology questions, like speciation and natural selection. To quote:
“There is a need to understand the relation between heredity and development, on the premise that natural selection has different results across a spectrum of heritable variations. With epigenetic systems in place and recognised, research that integrates development and evolution would result in a more powerful Darwinian theory.”
Amazingly, this realisation brings back (after all) Charles Darwin and his “Origin of Species” as compatible with, and complementary to, the Lamarckian view. The unlikely return to a Lamarckian view of the evolutionary mechanisms of inheritance, which states that inheritance should encompass “both genetic and non-genetic developmental resources”, seems eminently sensible at this point.
In the end, Moore believes that the burgeoning field of epigenetics has finally provided a mechanistic explanation for Darwin’s theory. As epigenetics becomes more prevalent and influential in so many fields, such as medicine, pathology and law (I am hoping so!), it is predicted to be a “monumental” step in how we view life. Its impact is potentially revolutionary in the sense that it spans from minute DNA molecules to the social and cultural hierarchies of human civilisation.
The academic workforce is very mobile. A quick perusal of the current and former members of I-DEEL shows that our lab is no exception; there are links to several universities around the world. I do not experience wanderlust, so the frequent necessity to uproot one’s life to stay in academia is one of the least appealing aspects of this career path. Still, I believe there’s merit to the cliché ‘get out of your comfort zone’, so I recently defied most of my basic instincts by moving halfway around the world, to join the Hendry Lab at McGill University for one year of my PhD.
I’ve been thinking about the costs and benefits of moving institutions. On the cost side, all the logistics take up big chunks of time before, during, and after the move, which reduces productivity. Then there are also the costs of leaving behind friends, family, and geographically confined interests, and the exhaustion of adjusting to a new home.
As for the benefits? The literature speaks of increased human and social capital, which simply means developing new knowledge and skills, and forming beneficial social connections. But I also think there might be a benefit through a motivation boost, like task switching, but on a large scale.
Psychologists used to think that humans had a finite amount of willpower, and if we used it up on one task (e.g., writing during the day) we would have less willpower to give to other tasks (e.g., exercising in the evening); the technical term is ‘ego depletion’. The recent surge in transparency and replication in the social sciences, however, has shown this could be false. Our willpower may be depleted if we are only doing one task, but if we switch tasks during the day, or take breaks, then we can continue to work at a productive level.
Moving institutions forces you to take a complete break from work during the actual moving phase, and then – because you are in a different place surrounded by different people – all your tasks feel slightly different. I have no idea whether there’s any merit to this idea, but there is some evidence linking mobility to productivity.
Whether or not moving is good for productivity, it is the current reality in the lives of many academics, and there are undoubtedly many personal benefits to be gained. So when I return to I-DEEL in 2018, I will hopefully bring back not only new academic skills, but also the ability to cycle through snowstorms, and an encyclopaedic knowledge of Quebec’s best cheeses.
Joel will be starting a wintry post-doc at the University of Edinburgh with Jarrod Hadfield, and Penny will move on to getting her qualifications to become a high school teacher!
We are so grateful for their company and work and wish them the very best for their upcoming ventures!
by Gihan Samarasinghe
We are living in the age of digitisation. Each day, hundreds of terabytes of data are dumped into digital storage networks (the World Wide Web being the largest of them all). Unfortunately, most of these data are unstructured and randomly ordered. Think how much unordered, unstructured information is shared on Twitter and Facebook within a few seconds!
Discovering the potentially important information hidden among unstructured data stored on the internet has always been an active research topic. That is where Natural Language Processing (NLP) comes to our aid. NLP is one of the strongest branches of Artificial Intelligence (AI), in which recordings or articles containing spoken or written human language are analysed by computer algorithms to extract and represent useful information in a structured manner.
Role of Text Mining
There are many data mining and machine learning techniques available for knowledge discovery in natural language contexts, and the term text mining is generally used for the techniques applied to text-based data sources. What basically happens in text mining is the statistical prediction of the relevance of a text-based data instance to a particular subject of interest, based either on previously learned inter-relations among sets of keywords or on predicted correlations among newly seen word groups. These predictions usually build on lexical, syntactic and semantic analyses of the text content.
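To make the keyword-based relevance idea concrete, here is a toy sketch in Python. It is purely illustrative (the documents, query and function names are invented for this example, not taken from any real text-mining library): documents are ranked against a subject of interest by weighting shared keywords with TF-IDF, a standard relevance statistic.

```python
# Toy keyword-based relevance scoring with TF-IDF (term frequency x
# inverse document frequency). Illustrative only: real text-mining
# pipelines add tokenisation rules, stemming, stop-word removal, etc.
import math
from collections import Counter

docs = {
    "d1": "zebrafish behaviour and learning in novel environments",
    "d2": "deep neural networks for text mining and classification",
    "d3": "text mining of ecology literature with machine learning",
}

def tokens(text):
    return text.lower().split()

# Inverse document frequency: words that are rare across the corpus
# carry more information, so they score higher.
n_docs = len(docs)
df = Counter(word for text in docs.values() for word in set(tokens(text)))
idf = {w: math.log(n_docs / c) for w, c in df.items()}

def relevance(query, text):
    """Sum the TF-IDF weights of query keywords found in the document."""
    tf = Counter(tokens(text))
    return sum(tf[w] * idf.get(w, 0.0) for w in tokens(query))

# Rank all documents against the subject of interest.
query = "text mining"
ranked = sorted(docs, key=lambda d: relevance(query, docs[d]), reverse=True)
print(ranked)  # documents about text mining rank above the zebrafish one
```

In a real pipeline, the learned inter-relations among keywords would come from a trained model rather than raw counts, but the principle is the same: relevance is a statistical score over word features.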
Can Text Mining Benefit from Deep learning?
The idea, and evolving implementations, of the family of machine learning techniques called Artificial Neural Networks (ANNs) have existed since the 1940s. ANN algorithms imitate, and are inspired by, the neural networks of animal brains. Traditional classification techniques in machine learning need a set of pre-defined features to learn from each presented instance (a process called training). ANNs, on the other hand, contain several sequential middle layers (called hidden layers, because the feature definitions in these layers are implicit and fuzzy) that define the features by themselves, by observing the important aspects of the training instances.
The rate of learning depends mainly on two factors: (i) the number of hidden layers and (ii) the number of training instances. Until the early 2000s, the vital bottleneck was the unavailability and high cost of computational power and storage. When these issues were addressed by revolutionary hardware inventions, such as fast computation on Graphics Processing Units (GPUs) and faster digital reads and writes on Solid State Drives (SSDs), ANNs also advanced rapidly. This was the beginning of the popular term Deep Neural Networks, which are nothing more than ANNs with a larger number of layers (hence “deeper”) and so a better learning rate. Day by day, various fields have benefitted from deep neural networks, thanks to the availability of millions of data points and vast computational power. Meanwhile, deep neural networks themselves are evolving, with optimised algorithms and frameworks bringing them closer and closer to the way the human brain thinks and understands things (although machine learning is still far behind our brains). The following figure gives a nice overview of how deep learning becomes better when there are more data to train with, and more capacity to handle those data.
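To make the idea of hidden layers concrete, here is a minimal, purely illustrative sketch in Python: a feed-forward network with one hidden layer that computes XOR, a function no network without a hidden layer can represent. The weights here are hand-picked rather than learned (real ANNs find them through training on instances), and all names are my own invention.

```python
# A minimal feed-forward network with one hidden layer, computing XOR.
# Weights are hand-picked for illustration; in practice they are learned.

def step(x):
    """Threshold activation: the unit fires (1) when its weighted input is positive."""
    return 1 if x > 0 else 0

def layer(inputs, weights, biases):
    """One fully connected layer: each unit takes a weighted sum plus a bias."""
    return [step(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor_net(x1, x2):
    # Hidden layer: unit 1 behaves like OR, unit 2 like NAND.
    hidden = layer([x1, x2], weights=[[1, 1], [-1, -1]], biases=[-0.5, 1.5])
    # Output layer: AND of the two hidden units gives XOR of the inputs.
    return layer(hidden, weights=[[1, 1]], biases=[-1.5])[0]

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # prints 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```

The hidden units here are exactly the “self-defined features” mentioned above: neither is the answer on its own, but their combination solves the problem, and deep networks simply stack many such layers.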
Text mining couples brilliantly with deep neural networks, given the kind of understanding needed for classifying text in context and for knowledge discovery. Deep-learning-based text mining techniques have become quite popular and successful at self-discovering and interpreting interesting features (keywords), and have already shown promising signs of becoming superior to traditional machine learning techniques.
Future of Systematic Mapping
Systematic mapping of the evidence on a topic of interest in the literature is a compelling potential use of text mining, now that large networks of millions and billions of text articles are available yet remain to be summarised or collated meaningfully. In this process, defining a proper set of keywords or search terms is a tricky task, as such terms are heavily subjective and ambiguous. Deep neural networks are therefore the way forward for future knowledge hubs and structured literature collections. Not only better classification of text documents using a self-defined set of keywords (given a proper training process), but also the visualisation and presentation of the learned interpretations, will be potentially interesting outcomes of deep-learning-based literature surveys.
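As a small illustration of how the data themselves can reduce the subjectivity of search terms, here is a toy sketch in Python. The abstracts and the function are invented for this example and are not from any real systematic-mapping tool: words that frequently co-occur with a seed keyword in the corpus become candidate additions to the search string.

```python
# Toy data-driven expansion of a search term for literature screening:
# words co-occurring with the seed keyword in the same abstract are
# counted, and the most frequent become candidate extra search terms.
from collections import Counter

abstracts = [
    "epigenetic inheritance and dna methylation in fish",
    "dna methylation patterns under environmental stress",
    "histone modification and methylation in early development",
    "field study of finch beak morphology",
]

def expand(seed, abstracts, top=3):
    """Return the words that most often co-occur with the seed term."""
    co = Counter()
    for text in abstracts:
        words = text.split()
        if seed in words:
            # Skip the seed itself and very short (mostly stop) words.
            co.update(w for w in words if w != seed and len(w) > 3)
    return [w for w, _ in co.most_common(top)]

print(expand("methylation", abstracts))
```

A deep-learning approach replaces these raw counts with learned word representations, but the goal is the same: let the literature, not just the reviewer’s intuition, shape the search terms.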
By Dan Noble
While we are all busy, there is always time for a Pub Crawl! Recently, the I-DEEL lab, joined by other lab groups at UNSW, embarked on a night out on the town to experience Sydney’s wonderful pub and bar scene. All up, we were a group of about 15 people, including members from the Cornwell, Brooks and Kasumovic labs.
The Goals: What would a pub crawl be without a few goals? To make things fun, we set out a list of potential pubs throughout the city that we wanted to explore, with the goal of hitting at least 10 by the end of the evening. At each pub, we all had to enjoy a single beverage and then move on to the next. Of course, some of us drank faster than others… so that goal was achieved fairly quickly by some.
Where did we get to? Given our group size we managed to hit quite a few places by the end of the night. Our adventures started at a local favourite and the group meandered down Oxford street, sampling along the way. The list of places and some pictures include:
Unfortunately, we didn’t quite make it to our 10 pubs (we got some great photos through the evening though), but we did well and will plan to improve on our efforts on the next crawl!
By Fonti Kar
Have you ever had a lull in productivity? Days where you feel like you are not achieving very much? You are not alone. In the first year of my PhD, the thing that stressed me most was not knowing whether I was productive enough. How do I track my progress? A few weeks ago, I went to a workshop on ‘A novel framework for research productivity’ run by post-doc Khandis Blake from the Sex Lab, UNSW. Inspired by Khandis’ productivity (she completed 16 studies during her PhD in 2.5 years!), I decided to blog about this, in the hope that someone will find it helpful for their own project management as well.
Khandis has a background in business coaching. In her workshop, she drew parallels between sales pipelines and research projects (Fig. 1). She discussed how one can increase customers, revenue and profits, or in research terms, the number of completed studies, submitted manuscripts and publications, by working on the factors that affect these key things.
For example, a business can work on converting people that walk in the store (‘leads’) into paying customers, which will ultimately increase store revenue/profits. Think store promotions or sale assistant greetings as you walk in. You can do the same with the number of completed projects by working on your leads. This can be the number of collaborations you have or own ideas you’ve identified from reading. You can focus on converting these ideas into completed projects by recruiting help with data collection (e.g. interns and student volunteers), or use a more efficient way to test your idea (e.g. theoretical models), or perhaps the data already exists and all you need to do is to put these together (e.g. meta-analyses).
Now, once a business has converted a lead into a paying customer, it can increase the number of transactions per sale, which will increase revenue. Think “Would you like fries with that?”, a line we are all too familiar with. The research equivalent of this is the idea of publication frames, i.e. the number of manuscripts one can address in a single project. Can you partition your data in multiple ways to test multiple hypotheses? Can you collect just a little more data (with minimal effort) so you can address another interesting question? Depending on the results, can you segment them to tell more than one cohesive story?
Finally, to ultimately increase the number of publications, your manuscripts need to be submitted. Your success rate depends on a range of factors, some of which are not in your control (e.g. time in review), but you can increase your chances by making sure the story is clear, concise and well written; sticking to journal guidelines; and achieving a fast turnaround with revisions or resubmissions to another journal.
But there is a catch with the research pipeline…
The lag in research
Khandis emphasised that the research pipeline is a long one. The time from a conceived idea, to data collection, to manuscript submission and acceptance is LONG. For example, I started data collection in early December 2014 for a paper that was accepted earlier this month (a 3-year pipeline!!!). This means that there is always a lag in productivity, and in order to avoid lulls, here are some tips:
Research projects – especially PhDs – can be long and demanding journeys, but with a clear pipeline in mind, one can hopefully navigate this path with a bit more ease and come out on the other side with a few more papers under your belt. Good luck!
Six years ago, I wrote a course manual for biology students learning statistics using R. I wanted to publish this manual as a book, but 6 years have passed since that resolution. Two months ago, I finally decided to accomplish this aim, and I am actually writing the book. I have already finished the first 2 chapters!
I did not want to create yet another textbook for stats (there are already a lot of good R stats books). Rather, the main reason for writing this book was my desire to join great English playwrights, like Bill Shakespeare. Yes, my stats book is a play! It is structured as conversations between 6 people. I give you an excerpt here:
Originally, I was planning 12 chapters (my original course manual had 7 chapters), but now I plan to have 15 chapters (and maybe 5 appendix chapters on top of that...). I have to update the 7 already-written chapters, because R has moved on a lot. Now, we have many super-nice new packages (e.g., the tidyverse family). So, in the updated version of the book, I use ggplot2, dplyr, tidyr and readr instead of the functions from the base packages. I am learning so much along the way! I just found a package called ‘GGally’ (the ally of ggplot2) - it is so great! I have redrawn the original scatterplot matrix (house sparrow morphological data) with the ‘ggpairs’ function instead of the ‘pairs’ function. Doesn’t it just look stunning?
When will I finish? I’ve told my publisher that I want to finish the whole book by the end of year... Let me aim for that – still 13 chapters to go!
by Malgorzata (Losia) Lagisz
We had a very special Saturday on 22 April 2017, with two special events. In the morning, we took part in the March for Science organised in Sydney. In the afternoon, we watched an AFL (Australian Football League) game, which seems to have nothing to do with science, but it has a link to our group.
The March for Science felt like a good place to go. It was a colourful and peaceful gathering of people who recognise and appreciate science, who are concerned about politicians ignoring science, and who see the need for evidence-based decision making. Needless to say, members of our lab attended this important gathering.
Photos: Marching for science
The AFL game was important to us for a different reason. It was not that the Sydney Swans played against the Giants – we did not particularly favour either of these two teams. It was our first live AFL game. And it was mostly about watching Rose. Rose is an Umpire (a judge); she is one of only 4 female Umpires in the AFL, and she is also a member of our group, doing a PhD at UNSW. We are very proud of her!
Photos: Rose in action. Watching the game.
We hope that what we did on Saturday matters. And the big thanks for the AFL tickets go to Rose!
Posts are written by our group members and guests.