Methods
Corresponding author: Dominik Buchner ( dominik.buchner524@googlemail.com ) Academic editor: Kat Bruce
© 2021 Dominik Buchner, Peter Haase, Florian Leese.
This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Citation:
Buchner D, Haase P, Leese F (2021) Wet grinding of invertebrate bulk samples – a scalable and cost-efficient protocol for metabarcoding and metagenomics. Metabarcoding and Metagenomics 5: e67533. https://doi.org/10.3897/mbmg.5.67533
Most metabarcoding protocols for invertebrate bulk samples start with sample homogenisation, followed by DNA extraction, amplification of a specific marker region, and sequencing. Many of these laboratory steps have been thoroughly validated and best-practice strategies exist, yet no clear recommendation is available for the step on which almost all metabarcoding studies rest: the homogenisation of the samples itself. Two categories of devices are typically used for homogenisation: bead mills and blenders. Both have advantages and drawbacks. Bead mills rely on single-use plastics and therefore produce a lot of waste and are expensive. In addition, processing times of up to 30 minutes make them unsuitable for large-scale studies. Blenders can handle larger sample volumes in a shorter time and can be cleaned, yet they carry an increased risk of cross-contamination. We aimed to develop a fast, robust, cheap, and reliable sample homogenisation protocol that overcomes the limitations of both approaches, i.e. one that does not produce difficult-to-discard waste and avoids single-use plastics while reducing overall costs. We tested the performance of the new protocol using six size-sorted Malaise trap samples and six unsorted stream macroinvertebrate kick-net samples. We used 14 replicates per sample and included numerous negative controls at different steps of the protocol to quantify the impacts of i) insufficient homogenisation and ii) cross-contamination. Our results show that 3 min of homogenisation is sufficient to recover about 80% of OTUs per sample in each replicate and that a non-hazardous DIY cleaning solution provides an effective and efficient way of cleaning. The protocol's improvements in speed, ease of handling, and overall cost, together with its documented reliability and robustness, make it a strong candidate for sample homogenisation after sampling, in particular for large-scale and regulatory metabarcoding as well as metagenomics-based biodiversity assessments and monitoring.
bioassessment, biodiversity soup, bulk sample, community metabarcoding, DNA isolation, LTER
DNA metabarcoding is an efficient tool to characterize invertebrate species composition in environmental samples. Starting material can be very different and include flying insect samples, soil samples, benthos, or plankton samples (
Many of the above-mentioned laboratory steps have been verified thoroughly and best-practice strategies exist. For example, different extraction protocols have been analyzed (
Sample homogenisation has been done without a drying step (
For sample homogenisation, different devices are used. They can be divided into two main categories: bead mills and blenders. Bead mills work by accelerating small, hard particles in a closed container, such as a Falcon tube, to break down tissue into small fragments. Blenders work with a rapidly rotating blade that slices the tissue.
While most bead mills rely on single-use plastics, blenders offer the option of being cleaned. Both methods, however, have downsides to consider: single-use plastics are ideal for avoiding cross-contamination, but they produce a lot of waste and are expensive compared with the costs of other parts of the workflow. Prices vary but can reach 15 € per sample. For sufficient homogenisation, run times range from 2 to 30 min (
Blenders on the other hand can handle large sample volumes more easily (e.g. 600 ml in
Here, we aimed to develop a fast, robust, cheap, and reliable sample homogenisation protocol that overcomes the above-mentioned limitations of both methods, i.e. one that does not produce difficult-to-discard waste and avoids single-use plastics. We tested the performance of the new protocol using six sorted Malaise trap samples and six unsorted stream kick-net samples. We used 14 replicates per sample and included numerous negative controls at different steps of the protocol to quantify the impacts of insufficient homogenisation and cross-contamination.
The design of this study is summarized in Fig.
Schematic overview of the study design. A) Six biological replicates of each sample type (stream kick-net sample, Malaise trap sample) were used in this study. B) Each sample was homogenised in the blender, which was cleaned 1–6 times afterwards by letting it run for 20 s with either ddH2O (blue drop, kick-net samples) or self-made decontamination solution (DIY-DS, green drop, Malaise trap samples). After cleaning, the blender was filled with EtOH to create a blender negative control. C) Each sample, as well as each blender negative control, was replicated seven times (extraction replicates) in 2 ml tubes. At this stage, 12 additional tubes that never had contact with the blender were added to distinguish possible contamination from the sample homogenisation from contamination that occurred in the downstream analysis. D) The samples were then transferred into two 96-well plates, which were replicated once more (technical replicates), to distinguish contamination that might have occurred at stage C) from contamination that might have occurred after stage D).
Two different sample types were used in this study: (i) six unsorted stream kick-net samples from a study conducted by
Samples were homogenized in a common kitchen blender (Mini Blender & Blender Smoothie, Homgeek, China) at 25,000 RPM for 3 min together with the preservation liquid. To reduce heating, samples were cooled to -20 °C prior to homogenisation. After homogenisation, samples were transferred back to their respective collection containers and stored at -20 °C until DNA extraction. The blade and container of the blender were cleaned with ddH2O until no visible remains of the sample were left. After that, the container was filled with either 100 ml of ddH2O or self-made decontamination solution (DIY-DS; 0.6% bleach, 1% NaOH, 1% Alconox, 90 mM sodium bicarbonate, Suppl. material
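For readers who wish to prepare the DIY-DS at a different scale, the stated concentrations can be converted into absolute amounts as sketched below. The percent bases assumed here (v/v for bleach, w/v for NaOH and Alconox) and the molar mass used for sodium bicarbonate are illustrative assumptions only; the authoritative recipe is given in Suppl. material (Protocol 1 – DIY-DS).

```python
# Hypothetical helper to scale the DIY-DS recipe to a given batch volume; fill up
# with ddH2O to the final volume after dissolving the components.
# Assumptions not stated in the text: bleach is dosed v/v, NaOH and Alconox w/v,
# and sodium bicarbonate has a molar mass of 84.01 g/mol.

NAHCO3_MOLAR_MASS_G_PER_MOL = 84.01

def diy_ds_recipe(batch_volume_ml: float) -> dict:
    """Component amounts for a given batch volume (ml) of DIY-DS."""
    return {
        "bleach_ml": batch_volume_ml * 0.006,   # 0.6 % v/v
        "NaOH_g": batch_volume_ml * 0.01,       # 1 % w/v
        "Alconox_g": batch_volume_ml * 0.01,    # 1 % w/v
        # 90 mM: mol/l * g/mol * volume in l = g
        "NaHCO3_g": 0.090 * NAHCO3_MOLAR_MASS_G_PER_MOL * batch_volume_ml / 1000,
    }

print(diy_ds_recipe(1000))  # 1 l batch: 6 ml bleach, 10 g NaOH, 10 g Alconox, ~7.6 g NaHCO3
```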
Before tissue lysis, the two size fractions of the Malaise trap samples were pooled in a 1:5 ratio (large-small, 5 ml and 25 ml) as suggested by
All subsequent processing steps were completed on a Biomek FXP liquid-handling workstation (Beckman Coulter, Brea, CA, USA). 60 µl of tissue dissolved in TNES buffer was taken twice from every tube, mixed with 133 µl TNES and 7 µl Proteinase K (10 mg/ml), and digested for 3 h at 55 °C. From this point onwards, the plates containing replicate samples (see Fig.
To control for possible contamination of the negative controls, all samples were amplified in a quantitative PCR (qPCR) in 20 µl reactions containing 1× perfeCTa FastMix, 300 nM of each primer (fwh2F, fwhR2n (
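As an illustration of the reaction setup, the sketch below converts the stated final concentrations into per-reaction pipetting volumes via C1·V1 = C2·V2. The stock concentrations (2× FastMix, 10 µM primer stocks) and the 1 µl template volume are assumptions for illustration; only the 20 µl reaction volume and the final concentrations (1× FastMix, 300 nM per primer) are taken from the text, and further reaction components may be listed in the passage truncated above.

```python
# Minimal sketch of a per-reaction volume calculation for the 20 µl qPCR above.
# Stock concentrations and template volume are assumptions, not protocol values.

REACTION_VOLUME_UL = 20.0

def volume_needed(final_conc: float, stock_conc: float,
                  reaction_ul: float = REACTION_VOLUME_UL) -> float:
    """C1 * V1 = C2 * V2  ->  V1 = C2 * V2 / C1."""
    return final_conc * reaction_ul / stock_conc

fastmix_ul = volume_needed(1.0, 2.0)        # 1x final from an assumed 2x stock -> 10.0 µl
primer_ul = volume_needed(300, 10_000)      # 300 nM final from an assumed 10 µM stock -> 0.6 µl
template_ul = 1.0                           # assumed template input
water_ul = REACTION_VOLUME_UL - (fastmix_ul + 2 * primer_ul + template_ul)

print(fastmix_ul, primer_ul, water_ul)      # 10.0 0.6 7.8
```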
The PCR for the metabarcoding library was done with a two-step PCR protocol (
In the second PCR, samples were amplified with the Qiagen Multiplex Plus Kit with the same final concentrations, except that 1 µl of first-step PCR product was used as template. The following amplification protocol was used: initial denaturation for 5 min at 95 °C; 25 cycles of 30 s denaturation and 60 s of combined annealing and extension at 72 °C; and a final elongation for 10 min at 68 °C. In the second PCR, each of the 96 wells was individually tagged so that the combination of the inline tag from the first PCR step and the index read of the second step yields a unique combination. The success of the PCR was visualised on a 1% agarose gel.
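The uniqueness of the resulting sample identifiers follows from combining the first-step inline tags with the second-step indices, as the minimal sketch below illustrates. The tag and index sequences are placeholders, not the ones used in this study.

```python
# Illustrative check that every sample receives a unique (inline tag, index) pair.
from itertools import product

inline_tags = ["ACGT", "TGCA", "GATC", "CTAG"]          # hypothetical first-PCR inline tags
plate_indices = [f"i7_{n:02d}" for n in range(1, 97)]   # hypothetical second-PCR indices, one per well

combinations = list(product(inline_tags, plate_indices))
assert len(combinations) == len(set(combinations)), "tag/index combinations must be unique"
print(f"{len(combinations)} uniquely identifiable samples")  # 4 * 96 = 384
```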
PCR product concentrations were normalised using the SequalPrep Normalisation plate (Invitrogen, Carlsbad, CA, USA). Normalised products were then pooled to the final library in equal parts for all samples. The library was concentrated using the NucleoSpin kit (Macherey Nagel, Düren, Germany) and dual-sided size selected (right ratio: 0.6; left ratio: 0.75) with the NucleoMag size-select kit (Macherey Nagel, Düren, Germany). Library concentration was quantified on a Fragment Analyzer (High Sensitivity NGS Fragment Analysis Kit; Advanced Analytical, Ankeny, USA). The library was then sequenced using the HiSeq X platform with a paired-end (2×151 bp) kit at Macrogen Europe.
For analysis of the qPCR results, raw fluorescence values were exported from the instrument and baseline-corrected with the LinRegPCR software (
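For readers unfamiliar with the concept, the snippet below gives a much-simplified illustration of baseline correction by subtracting the mean of the early-cycle fluorescence from each trace. This is not the regression-based baseline estimation that LinRegPCR performs; it only illustrates the idea of removing background fluorescence before further analysis.

```python
# Much-simplified illustration of qPCR baseline correction (NOT the LinRegPCR algorithm):
# subtract the mean signal of the first few cycles from every reading of a trace.
import numpy as np

def baseline_correct(fluorescence: np.ndarray, baseline_cycles: int = 5) -> np.ndarray:
    """Subtract the mean of the first `baseline_cycles` readings from a trace."""
    baseline = fluorescence[:baseline_cycles].mean()
    return fluorescence - baseline

raw_trace = np.array([100, 101, 99, 100, 100, 105, 130, 210, 400, 750], dtype=float)
print(baseline_correct(raw_trace))
```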
The raw data of the sequencing run comprised 638,892,616 reads and were delivered demultiplexed by index reads. Index jump (sensu
To control for contamination on the robotic deck, the technical replicates of the plates were merged, retaining only the mean read number of both replicates if reads were present in both replicates. After that, the maximum number of reads for each OTU across all additional negative controls was calculated and subtracted from the reads of the respective OTU to remove noise introduced by the laboratory workflow, resulting in a cleaned read table (Suppl. material
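The sketch below illustrates this read-table cleaning with a toy pandas table: technical replicates are merged by keeping the mean read number only where an OTU has reads in both replicates, and the per-OTU maximum across the negative controls is then subtracted from every sample. Column and OTU names are placeholders; the actual processing is documented in the supplementary script.

```python
# Sketch of the described read-table cleaning, assuming a pandas DataFrame with
# OTUs as rows and samples as columns; names are hypothetical.
import pandas as pd

def merge_technical_replicates(rep_a: pd.Series, rep_b: pd.Series) -> pd.Series:
    """Keep the mean read count only where an OTU has reads in both replicates."""
    both_present = (rep_a > 0) & (rep_b > 0)
    return ((rep_a + rep_b) / 2).where(both_present, 0)

reads = pd.DataFrame({
    "sample1_repA": [120, 0, 15], "sample1_repB": [100, 8, 0],
    "negctrl_repA": [2, 0, 0],    "negctrl_repB": [4, 0, 0],
}, index=["OTU_1", "OTU_2", "OTU_3"])

merged = pd.DataFrame({
    "sample1": merge_technical_replicates(reads["sample1_repA"], reads["sample1_repB"]),
    "negctrl": merge_technical_replicates(reads["negctrl_repA"], reads["negctrl_repB"]),
})

# Subtract the per-OTU maximum across all negative-control columns from every sample.
negative_max = merged[["negctrl"]].max(axis=1)
cleaned = merged.drop(columns="negctrl").sub(negative_max, axis=0).clip(lower=0)
print(cleaned)  # OTU_1: 110 - 3 = 107 reads remain; OTU_2 and OTU_3 are removed
```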
To compute the similarity between samples, the Jaccard index was used (
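For presence/absence OTU data, the Jaccard index is the size of the intersection divided by the size of the union of two OTU sets, as the minimal sketch below shows with placeholder OTUs.

```python
# Jaccard similarity between two replicates based on OTU presence/absence:
# J(A, B) = |A ∩ B| / |A ∪ B|.
from itertools import combinations

def jaccard(otus_a: set, otus_b: set) -> float:
    union = otus_a | otus_b
    return len(otus_a & otus_b) / len(union) if union else 1.0

replicate_1 = {"OTU_1", "OTU_2", "OTU_3"}
replicate_2 = {"OTU_2", "OTU_3", "OTU_4"}
print(jaccard(replicate_1, replicate_2))  # 2 shared / 4 total = 0.5

# Pairwise comparisons, as done between the 7 extraction replicates of each sample:
replicates = [replicate_1, replicate_2]
pairwise = [jaccard(a, b) for a, b in combinations(replicates, 2)]
```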
No identifiable parts of the animals were left after 3 min of homogenisation. However, particle size was, overall, coarser for the Malaise trap samples (Suppl. material
All invertebrate samples were amplified successfully during qPCR analysis at first try in both technical replicates (Suppl. material
Sequencing yielded 110,641,213 reads that could not be assigned to any of the index combinations used in this study. Only 15 of these reads had a combination of the used twin indices, resulting in a very low index-jumping rate of 2×10⁻⁸.
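The reported rate follows from relating the crossed twin-index reads to the total read count. The sketch below illustrates, with placeholder barcodes, how crossed combinations of used twin indices can be counted among unassigned reads and converted into a rate; it is an illustration, not the exact procedure of the study.

```python
# Illustrative estimate of the index-jumping rate from twin-indexed libraries:
# with identical i5/i7 barcodes per sample, an unassigned read carrying two
# *different* used barcodes points to an index jump. Barcodes and read pairs are
# placeholders; only the totals in the final comment come from the text above.
used_indices = {"AAAA", "CCCC", "GGGG", "TTTT"}                            # hypothetical twin barcodes
unassigned_pairs = [("AAAA", "CCCC"), ("AAAA", "NNNN"), ("GGGG", "TTTT")]  # (i7, i5) of unassigned reads

jumped = sum(
    1 for i7, i5 in unassigned_pairs
    if i7 in used_indices and i5 in used_indices and i7 != i5
)
total_reads = 638_892_616
print(f"index-jumping rate ≈ {jumped / total_reads:.1e}")
# With the 15 crossed reads reported above: 15 / 638,892,616 ≈ 2.3e-08, i.e. ~2×10⁻⁸.
```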
Mean OTU richness across all seven replicates ranged from 48.71 (45–50) to 74.00 (67–83) for the kick-net samples. For the blender negative controls that were rinsed with ddH2O, the mean richness ranged from 0.14 (0–1) to 1.71 (0–6). Regarding the blender negative controls, none of the OTUs was found in all 7 replicates. For the Malaise trap samples mean OTU richness ranged from 293.57 (279–301) to 446.71 (437–456). For the blender negative controls rinsed with DIY-DS, the mean richness ranged from 1.14 (0–4) to 4.00 (0–20). None of the OTUs was found in all 7 replicates either (Fig.
Mean OTU richness for kick-net samples (top left panel) and Malaise trap samples (top right panel). Mean sum of all reads across all 7 replicates of one sample for the kick-net samples (bottom left panel) and the Malaise trap samples (bottom right panel). The sample number also indicates the number of cleaning rounds after each sample. Error bars indicate the 95% confidence interval ranging from the 2.5th to the 97.5th percentile.
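As a side note on the error bars, a percentile interval from 2.5 to 97.5 can be obtained, for example, by bootstrapping the replicate values as sketched below. Whether the study derived the interval this way or directly from the replicate values is not specified here, so the snippet is purely illustrative and the richness values are hypothetical.

```python
# Sketch of a percentile-based 95 % interval (2.5th to 97.5th percentile) for the
# mean OTU richness of one sample, derived by bootstrapping 7 replicate values.
import numpy as np

rng = np.random.default_rng(42)
richness = np.array([48, 45, 50, 49, 47, 50, 46])   # hypothetical replicate OTU counts

boot_means = [rng.choice(richness, size=richness.size, replace=True).mean() for _ in range(1000)]
low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {richness.mean():.2f}, 95 % interval = [{low:.2f}, {high:.2f}]")
```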
The mean number of reads per sample was overall higher for the kick-net samples than for the Malaise trap samples (1.28×10⁶–1.72×10⁶ vs. 7.31×10⁵–1.04×10⁶). The mean number of reads in the blender negative controls was overall lower for the DIY-DS treatment than for the ddH2O treatment (192 vs. 20,718). The mean read numbers were largely influenced by one OTU having a high number of reads for only one of the mixing negative controls (Fig.
Mean Jaccard similarity between the 7 extraction replicates was overall high for both sample types (kick-net samples: 0.81 vs. Malaise trap samples: 0.84) with the spread being higher for the kick-net samples (0.58 – 0.94) in comparison to the Malaise trap samples (0.78 – 0.9) mainly due to sample number 5 of the kick-net samples (Fig.
Pairwise comparison of extraction replicates. For each pair of extraction replicates within one sample the Jaccard similarity was computed for the kick-net samples (left panel) and the Malaise trap samples (right panel).
Rarefaction analysis for the extraction replicates of samples (Fig.
Rarefaction analysis of the technical replicates of each sample (upper row: kick-net samples, lower row: Malaise trap samples). Samples were randomly drawn 1000 times without replacement to generate the distribution. The yellow dashed line indicates 80% of the maximum possible value, the red dashed line indicates 90% of the maximum possible value.
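The replicate-based rarefaction described in the caption can be reproduced in principle as sketched below: for each subset size, replicates are drawn repeatedly without replacement and the fraction of the total OTU pool recovered in the union is recorded. The OTU sets are placeholders, and the snippet is an illustration rather than the study's own script (see Suppl. material, Script 1).

```python
# Sketch of replicate-based rarefaction: for every subset size, draw replicates
# 1000 times without replacement and record the share of OTUs recovered in the
# union relative to the total found across all replicates.
import random

replicate_otus = [set(random.sample(range(100), 60)) for _ in range(7)]  # 7 hypothetical replicates
total_otus = set().union(*replicate_otus)

for subset_size in range(1, len(replicate_otus) + 1):
    recovered = []
    for _ in range(1000):
        drawn = random.sample(replicate_otus, subset_size)   # drawing without replacement
        recovered.append(len(set().union(*drawn)) / len(total_otus))
    print(f"{subset_size} replicate(s): {sum(recovered) / len(recovered):.1%} of OTUs recovered")
```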
Our study aimed to develop and test an improved invertebrate homogenisation method that is easy to apply, robust, and reliable, while being cost- and time-efficient. Blenders are already used for sample homogenisation in metabarcoding studies; however, none of these approaches met the above-mentioned criteria.
A central concern for bulk-sample metabarcoding using blenders rather than single-use plastics is the risk of cross-contamination. The approach we present minimizes this risk effectively for three reasons: i) pipetting homogenized samples in ethanol (wet grinding) limits the risk of electrostatic charge and thereby of ‘jumping’ specimens. ii) Both tested cleaning procedures, i.e. cleaning with ddH2O and with the DIY-DS, proved to be highly effective. While we sporadically observed low read numbers of single OTUs in some blender negative controls, this was never the case for all 7 extraction replicates of the given sample. This suggests that the contamination did not happen in the blender (or only through a few remaining molecules). This was further supported by the observation that some of the OTUs found in the blender negative controls were not found in the sample processed before. Furthermore, the DIY-DS reduced this already sporadic and low-level contamination even further and is therefore recommended. iii) The stringent replication scheme, i.e. performing extraction and downstream analysis twice in physically independent plates that are never open on the benchtop at the same time, further limits the possibility of cross-contamination. This makes it possible to control for low-level cross-contamination by accepting only reads or OTUs/ESVs that are found in both replicates.
For the two sample types analyzed here, stream benthic macroinvertebrates and insect Malaise trap samples, we observed a high consistency between extraction replicates, with typically 80% or higher OTU overlap among replicates (Jaccard similarity). Stream invertebrate kick-net sample 5 was an outlier with only 60–70% overlap. Possible reasons for the lower overlap are insufficient blending of the sample or independent contamination of replicates. Independent contamination seems unlikely, as it was not observed in any of the blender negative controls run between samples. Microscopic inspection of the homogenized tissue also did not indicate systematic differences between sample 5 and the other samples (Suppl. material
To further improve replicate consistency, i.e. maximize the overlap between replicates, it might be beneficial to first perform lysis on a large fraction of the sample and then carry out the downstream analysis using two (or more) replicates. Alternatively, and in particular when the aim is to recover the maximum of species diversity, more extraction replicates should be performed. However, our analysis shows that a single replicate already detected 80% of the OTUs found in all 7 extraction replicates.
Invertebrate assessment and monitoring using bulk DNA metabarcoding or metagenomics require fast, reliable, and validated protocols that are ideally economically competitive and environmentally friendly (
With this container, wet grinding of ethanol-preserved samples can be done quickly and reliably, even with large volumes of 500 ml. We could process about 30 complete bulk samples per 8 h with one person and one blender. Thus, the approach based on wet-sample grinding is not only technically feasible, scientifically reliable, economically competitive, and environmentally friendly, but also fast and scalable, allowing for large-scale DNA-based biomonitoring.
Demultiplexed raw read data for this publication has been uploaded to Zenodo.org and can be accessed via 10.5281/zenodo.5039930.
We thank the Leese lab members, especially Martina Weiss and Arne J. Beermann, for comments and feedback on the study design. We thank Alexander Klenov (www.pipettejockey.com) for valuable input on the DIY-DS. FL is a member of and supported by COST Action DNAqua-Net (CA15219). This project was conducted as part of DFG project LE 2323/9-1. PH received funding from the eLTER PLUS project (Grant Agreement No. 871128).
Protocol 1 – DIY-DS
Data type: text
Table S1. PCR primers used in this study
Data type: text
Table S2. Raw read table
Data type: text
Table S3
Data type: text
Script 1
Data type: code
Figure S1. Pictures were taken with a digital microscope (Keyence VHX-6000, Keyence, Osaka, Japan)
Data type: image
Explanation note: Top row: Kick-net samples. Bottom row: Malaise trap samples.
Figure S2. Baseline-corrected amplification curves (left half) and melting curves (right half) for A & B) kick-net samples and C & D) Malaise trap samples
Data type: statistical data
Explanation note: Different colors indicate different technical replicates of the same sample.
Figure S3
Data type: image