
Experimental Controls For Reproducible Flow Cytometry Measurements

With the increased focus on reproducibility of scientific data, it is important to look at how data is interpreted. To assist in data interpretation, the scientific method requires that controls are built into the experimental workflow. These controls are essential to minimize the effects of variables in the experiment so that changes caused by the independent variable can be properly elucidated. In fact, one of Begley’s 6 rules, as described by Bruce Booth, asks if the positive and negative controls were both shown.

What types of controls should be considered when designing a flow cytometry experiment?

Focus controls to minimize confounding variability. Sample processing, for example, can be controlled using a reference control. Where to properly set gates can be addressed using the FMO control. Controls for treatment can include Unstimulated and Stimulated controls. Reagent controls ensure that the reagents are working, and are at the correct concentration. Compensation controls are critical — these have been discussed in detail elsewhere. Of course, there are some controls that do not actually control for what they are used for, such as the isotype control.

1. Reference controls.

The purpose of a reference control is to determine if the process — from sample preparation through staining — has been performed consistently. It also allows for a reference range to be established that reflects the inherent variability in the preparation process.

Identifying a reference control is an important step in the panel design/validation process. This control should be readily accessible: for example, a large number of frozen PBMCs from a single source, or a defined mouse strain.

This sample must also reflect the expected staining pattern in sufficient detail to allow for verification that the antibodies properly labeled the targets.

When staining an experimental sample, the reference control is also stained.

If it behaves differently than you would expect on common plots, then there is likely a problem with the experiment and you need to troubleshoot.

It’s a great indicator of the health of your experiment. An example of this data is shown in Figure 1.

Figure 1: Tracking the results of staining a reference control.

This figure shows the results of 8 independent experiments, with the mean and SD shown. In the case of outliers, such as the 2 examples marked by red arrows, it is critical to identify the root cause of the variation.

An added benefit of the reference control is that it can be used as a training tool for new users.

Since the expected range is known, having them stain the reference control helps them gain confidence in their technique.

Before the reference control is used up, it is critical to perform an overlap experiment. Run the new control 3-5 times in parallel with the old control to determine the differences between the old and new control ranges. Don’t forget to document!!!
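
As a rough illustration of how a reference range and a lot-to-lot overlap might be tracked, here is a minimal Python sketch; the marker values, run counts, and the 2-SD outlier rule are hypothetical placeholders, not a prescribed protocol.

```python
import numpy as np

# Hypothetical %-positive values for one marker from repeated runs
# of the old and new reference control lots (run in parallel).
old_lot_runs = np.array([42.1, 43.5, 41.8, 44.0, 42.9, 43.2, 42.5, 43.8])
new_lot_runs = np.array([40.9, 41.5, 42.2, 41.1, 41.8])

old_mean, old_sd = old_lot_runs.mean(), old_lot_runs.std(ddof=1)
new_mean, new_sd = new_lot_runs.mean(), new_lot_runs.std(ddof=1)

# Document the shift between lots so future runs are judged against
# the correct reference range.
print(f"Old lot: {old_mean:.1f} ± {old_sd:.1f} %")
print(f"New lot: {new_mean:.1f} ± {new_sd:.1f} %")
print(f"Offset between lots: {new_mean - old_mean:+.1f} %")

def is_outlier(value, mean, sd, n_sd=2):
    """Flag a later run as an outlier if it falls outside mean ± n_sd * SD."""
    return abs(value - mean) > n_sd * sd

print(is_outlier(45.0, new_mean, new_sd))  # True -> troubleshoot before trusting the data
```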

2. Fluorescence minus one control.

A very important control for data interpretation is the fluorescence minus one, or FMO, control. This is a gating control that is used to separate positive from negative. It is designed to reveal the spread of the data, as it captures the contribution of measurement error from all the other fluorochromes in the panel into the channel of interest.

As the name implies, cells are stained with all fluorochromes in the panel, except the one of interest. An example of this is shown below, in Figure 2.

Figure 2: FMO control in a 5-color panel to identify the proper placement of gates.

The red-dashed line represents the unstained boundary for the data. The middle panel represents the FMO control.

The staining above the red line implies that those cells are positive for the PE marker. However, since that tube doesn’t have PE, those cells cannot be positive.

The true boundary is shown by the blue line. The arrow on the far right panel shows the spread of the data — this is caused by the other fluorochromes in the panel spilling over into the channel of interest.

FMO controls are critical for setting gates, especially for rare events, emerging antigens, or any case where sensitivity is important to the measurement.

During the panel development phase, it’s good practice to run all possible FMO controls. From there, identify those controls that are essential for identifying the target cells, and run those with every panel.

3. Unstimulated control.

When performing a stimulation experiment, it is valuable to run both a stimulated and unstimulated control.

The stimulated control should be cells treated with a very powerful stimulant. This ensures that the cells can be stimulated, that the reagents are working, and it provides an upper limit for expected results.

The unstimulated control is also critical. In this case, the cells are not stimulated so that background signal can be identified. Shown here is data from the 2006 Maecker and Trotter paper. This figure shows SEB-stimulated cells, looking at CD4 expression on the y-axis and IL-2 production on the x-axis.

Figure 3: Controls for stimulation experiments. From Maecker and Trotter (2006) Figure 3.

The fully stained sample is shown at the top, and the FMO control is in the bottom middle and reveals the spectral contribution of the other fluorochromes in the panel to the PE channel. On the right is the isotype control, but more on this topic later.

The unstimulated sample, on the left, should have no IL-2 PE positivity.

Starting with the FMO control, and adjusting for the background staining of the IL-2 antibody using the unstimulated sample, allows for correct gate placement.

4. Isotype control.

The isotype control has been used in flow cytometry for many years. The theory behind this control is that non-specific binding of a given antibody isotype can be determined using an antibody of the same isotype as the antibody of interest, but to an irrelevant target.

For example, if your antibody of interest is a mouse IgG1, κ, clone MOPC-21 is an appropriate isotype control. The problem is that MOPC-21 has been around since the 1970s and the target is still unknown. This illustrates the assumptions that are made when using isotype controls:

  1. The isotype control has the same affinity and characteristics for secondary targets as the original target antibody does.
  2. There are no primary targets for the isotype control to bind.
  3. The fluorochrome-to-protein ratio is the same on the target antibody as it is on the isotype control.

Historically, isotypes have been used to set gates and determine positivity. However, since the validity of these 3 assumptions is unclear, the isotype control is not a true control, but rather another experimental variable.

Thus, the isotype control is not an effective or worthwhile control, and you are better off focusing on other controls.

In the 2006 Maecker and Trotter paper, the authors showed the following figure (Figure 4, left panel).

Figure 4: Isotype Control data.

The cells, “small lymphocytes”, were identified by scatter characteristics and the staining of 3 different isotype controls is shown. The red line is added for emphasis.

More recently, a paper by Andersen and colleagues attempted to identify the best methods for blocking their cells of interest. The first figure shows the results of staining for a known target (Tie-2) on the surface of the cells of interest. The corresponding isotype control staining is also shown and, based on that staining, the interpretation of the data would be that the cells do not express Tie-2, which is known to be false.

Expanding on this finding, the authors attempted to identify the best blocking strategy for their target cells. The results of this data are shown in Figure 5.

Figure 5: Results of testing different blocking reagents.

The authors compared the Median Fluorescence Intensity (MFI) of the unstained cells in the channel of interest in the absence of an isotype control to the MFI of the cells stained with the appropriate isotype control.

Different blocking strategies were applied and the cells were then stained with the isotype control. The results suggested that human IgG was the best blocking reagent, given its low cost and stability.

5. Reagent Controls

A. Titration

One important experimental control is to validate the amount of antibody being used for staining. If too much antibody is used, there will be an increase in non-specific binding, reducing sensitivity. Too little antibody, and the cells are not saturated — again, resulting in reduced sensitivity.

The best way to determine the optimal antibody concentration is to perform a titration experiment. In a titration experiment, you vary the amount of antibody used in staining, while holding other variables — incubation time, temperature, and cell concentration — constant. After acquiring the data, calculate the staining index for each concentration. An example of a titration experiment is shown below in Figure 6.

Figure 6: Example of antibody titration.

The plot on the right of concentration vs staining index shows that at low or high antibody concentrations, the SI decreases. The boxed region between is the optimal staining range. Splitting the difference between the 2 shoulders provides a good recommendation for the antibody concentration to use.
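
To make the calculation concrete, here is a minimal Python sketch of scoring a titration series with a staining index; the doses, medians, and SDs are hypothetical, and the formula shown (separation divided by twice the spread of the negative) is one commonly used form of the staining index, not the only one.

```python
import numpy as np

# Hypothetical titration results: antibody amount (µg/test) versus the
# medians/SD of the positive and negative populations at each dose.
titration = {
    # µg/test: (median_pos, median_neg, sd_neg)
    2.0:   (48000, 900, 310),
    1.0:   (47000, 520, 240),
    0.5:   (45000, 410, 220),
    0.25:  (38000, 400, 215),
    0.125: (21000, 395, 212),
}

def staining_index(median_pos, median_neg, sd_neg):
    """One common form of the staining index: separation of the positive
    population from the negative, scaled by the spread of the negative."""
    return (median_pos - median_neg) / (2 * sd_neg)

for dose, (pos, neg, sd) in sorted(titration.items(), reverse=True):
    print(f"{dose:>6} µg/test  SI = {staining_index(pos, neg, sd):6.1f}")
```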

B. Isoclonal control

The isoclonal control was originally published to demonstrate that the cells of interest were not binding the fluorochrome on the antibodies, as has been shown for CD64. The isoclonal control is a great way to show that you have specific binding.

To perform the isoclonal control, mix unlabeled antibody of the same clone to compete with the binding of the original antibody. As shown in Figure 7, increasing the ratio of unlabeled antibody results in a decrease in staining.

Figure 7: Isoclonal control demonstrating specific binding.

In conclusion, getting into the mindset to improve the reproducibility of flow cytometry experiments requires a hard look at the appropriate controls to use in each experiment. These controls are essential tools for proper data interpretation, and should be referred to in any communication about the data and shown in supplemental figures at a minimum. Further, consider showing all the data by uploading it to the FlowRepository.

In the end, it is in everyone’s interest to provide the best data, with all the necessary information, to reproduce and expand the findings. As Isaac Newton said, “If I have seen further than others, it is by standing on the shoulders of giants.” That is how science makes progress.

To learn more about Experimental Controls For Reproducible Flow Cytometry Measurements, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.

Flow Cytometry Mastery Class wait list | Expert Cytometry | Flow Cytometry Training


Instrument Quality Control For Reproducible Flow Cytometry Experiments

The NIH has released a series of reproducibility guidelines that scientists must address. These guidelines have been introduced because there’s a lot of data showing that reproducibility in science is frustratingly lacking in certain experiments.

Non-reproducible experiments waste time, money, and resources. As Begley and Ioannidis cite in their 2015 article:

“The estimates for irreproducibility based on these empirical observations range from 75% to 90%. These estimates fit remarkably well with estimates of 85% for the proportion of biomedical research that is wasted at-large.”

Reproducibility is a mindset, and involves an overall analysis of the scientific process to identify the areas that can be improved. In this article by Bruce Booth, he reviews “Begley’s six rules”. Two of these rules focus on the controls and reagents used in the experiment.

Of equal importance are the instruments used to make measurements. For example, how often are the pipettes calibrated? Are all lab members adequately trained in technique? This chart from Gilson is a useful one to have in the lab as a reminder on proper pipetting form.

The flow cytometer is an integral component of any flow cytometry experiment, and special attention should be paid to ensuring that it is working correctly and consistently. As an end-user, the researcher should be able to sit down at a machine and know that it is performing the same way today as it was yesterday and last week.

Equally important is that if any changes in instrument performance have occurred, the end-user knows how they have been addressed and corrected, rather than letting them fester and potentially affect the results.

For those using core facility equipment, it is important to talk to the people who maintain your instruments and look at the quality control data.

Ask them how they are assessing and maintaining the quality of the instrument, and ask what the best way is for you to make sure your data is consistent. The staff will be delighted to advise! Quality control measurements can include a variety of targets, such as PMT sensitivity, laser alignment, fluidic stability, background issues, and more.

1. PMT Voltage Optimization.

With the analog flow cytometers, researchers were taught to run an unstained cell sample and adjust voltages so that the background was in the first log decade. This outdated practice continues to be taught, even on new digital instruments.

A better approach is to have optimized PMT voltage settings, and to use these voltages for experiments.

In 2006, Maecker and Trotter published a paper on how to determine the optimal voltage for a PMT. In this case, the second peak of the Spherotech 8-peak Rainbow Calibration Particles (RCP-3-5A-2) is run over a voltage range for each PMT. The robust coefficient of variation (rCV) is calculated for each PMT at each voltage, and a curve is plotted, as shown in the figure below.

Figure 1: Peak-2 characterization of PMTs.

The resulting curve shows a steep descent where the rCV decreases as voltage increases. Eventually, an inflection point is reached where the slope of the line changes. At this point, increasing voltage doesn’t improve the rCV.

The optimal voltage range is defined as the region just after the inflection point. In the case of Figure 1, this is between 400 and 450 volts.
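
For intuition, here is a minimal Python sketch of the peak-2 style calculation using simulated bead data; the robust CV definition (16th-84th percentile spread over twice the median) and the voltage-to-signal model are illustrative assumptions, not instrument specifics.

```python
import numpy as np

def robust_cv(values):
    """One common robust CV estimate: the 16th-84th percentile spread
    divided by twice the median, expressed as a percent."""
    p16, p50, p84 = np.percentile(values, [16, 50, 84])
    return 100.0 * (p84 - p16) / (2.0 * p50)

# Simulated peak-2 bead data: signal grows steeply with PMT voltage, while a
# fixed electronic-noise floor dominates the spread at low voltage.
rng = np.random.default_rng(0)
for v in range(300, 651, 50):
    mean_signal = 200 * (v / 300) ** 6            # gain rises steeply with voltage
    sd = np.hypot(0.02 * mean_signal, 60)         # 2% intrinsic bead CV + noise floor
    beads = rng.normal(mean_signal, sd, size=10000)
    print(f"{v} V  rCV = {robust_cv(beads):5.2f} %")
# The optimal range starts just past the voltage where the rCV stops improving.
```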

When running a one-off experiment, set the voltage for the detectors to these voltages and begin collecting data. There are very few reasons to deviate from this starting voltage.

The most common situation is if the fluorescent signal on your target cells is out of the linear range of the detector, or off-scale. In this case, one might consider reducing the PMT voltage, but a better approach would be to reduce the fluorescence intensity by staining with a mix of labeled and unlabeled antibody.

When doing this, you keep the total antibody concentration constant, but now the cells will have a lower total fluorescence due to the presence of unlabeled antibody.

However, peak-2 beads are not cells, and don’t necessarily have the same spectral characteristics as the fluorochromes used in the experiment.

If the panel is going to be used consistently over a long period of time, the optimal voltage can be refined by performing a voltage titration (voltration) experiment using stained cells.

An example of this secondary optimization is shown in Figure 2. On the left is the optimization for Cy7-APC, and on the right, Brilliant Violet 650. The peak-2 value for each of these fluorochromes is indicated by the blue arrow. However, this peak-2 value is not always the optimal voltage for a specific fluorochrome.

Figure 2: Voltage optimization of 2 fluorochromes.

To determine if the peak-2 value is optimal, increase the voltage past this point and calculate the staining index. In the case of Cy7-APC, as voltage increased, the signal very rapidly moved out of the linear dynamic range, as shown by the red arrow, and eventually moved completely off scale. This indicates that the peak-2 value is best for this fluorochrome.

Brilliant Violet 650, on the other hand, shows an increase in the staining index above the peak-2 value. At the optimal voltage for this fluorochrome-detector pair, the SI is about 16% higher than at the peak-2 value, suggesting that this higher voltage is better for making a more sensitive measurement.

Once this voltration is performed, and the optimal voltage identified, it becomes important to maintain performance consistency over time. To do this, another bead is used: in this case, the sixth peak of the 8-peak Rainbow Calibration Particles, or any other bright bead of your choice.

Run the bead at the optimal voltages and determine the target values in each detector. This is typically measured as the median fluorescent intensity (MFI). Each time the experiment is to be run, the calibration bead is run first and detector voltages are adjusted to ensure that the MFI is maintained, within some tolerance range.

Once that is done, begin collecting the rest of the tubes. If the voltage changes are significant, it is good to pause and consult the daily QC, or your core staff, to determine if there has been a change to the system.

Defining “significant” is up to the investigator, but a good rule of thumb is that a deviation of more than 10% in voltage should require some investigation. You should collect and record at least 10,000 single beads for your QC tracking.
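
A simple check along these lines might look like the following Python sketch; the baseline voltage, today's voltage, and the 10% tolerance are placeholders to illustrate the rule of thumb.

```python
def voltage_deviation_flag(baseline_voltage, todays_voltage, tolerance=0.10):
    """Flag when the voltage needed to hit the bead target MFI has drifted
    more than the chosen tolerance from the baseline set at optimization."""
    deviation = abs(todays_voltage - baseline_voltage) / baseline_voltage
    return deviation > tolerance, deviation

flag, dev = voltage_deviation_flag(450, 512)
print(f"Deviation {dev:.1%} - investigate before running samples: {flag}")
```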

It is important to remember that Quality Control like this is not valuable unless it is monitored. To do that, use the Levey-Jennings plot. This is a quality control plot that shows the running average of the data with control lines. In the case below, the dotted lines represent +/- 1 and 2 standard deviations around the mean.

This plot can be used to spot trends in the data. For example, if a data point falls outside of the control lines, it is a sign there may be an issue with the system, and intervention may be required.

This could just be cleaning, but may also indicate something else that the manager of the instrument needs to know. If, over time, there is a trend in one direction or the other (continued increase and/or decrease), it is also a good time to intervene, if for no other reason than to see if other issues with the system have occurred.

Figure 3: Levey-Jennings plot over time tracking PMT voltage changes using peak-6 bead method.

As shown in Figure 3, the data on this instrument stayed within 2 standard deviations of the mean over 100 days, giving added confidence that the instrument is working consistently and that changes in the data are more likely due to biology, not instrument issues.

If these start to fall out of this range, then you would need to investigate and troubleshoot. You can be more restrictive and use only one standard deviation, but the industry standard is to use two.
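
Here is a minimal matplotlib sketch of such a Levey-Jennings chart, using simulated daily bead MFI values; the target MFI and spread are invented for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical daily peak-6 bead MFI readings for one detector.
rng = np.random.default_rng(1)
days = np.arange(1, 101)
mfi = rng.normal(50000, 1200, size=days.size)

mean, sd = mfi.mean(), mfi.std(ddof=1)

fig, ax = plt.subplots(figsize=(8, 3))
ax.plot(days, mfi, marker="o", ms=3, lw=0.8)
ax.axhline(mean, color="k")
for n, style in [(1, "--"), (2, ":")]:
    ax.axhline(mean + n * sd, color="gray", ls=style)   # upper control line
    ax.axhline(mean - n * sd, color="gray", ls=style)   # lower control line
ax.set_xlabel("Day")
ax.set_ylabel("Peak-6 bead MFI")
ax.set_title("Levey-Jennings chart: points beyond ±2 SD warrant investigation")
plt.tight_layout()
plt.show()
```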

While instrument QC is usually left to the manager of the instrument, adding experimental QC using an independent measurement will aid in improving the quality, and ultimately reproducibility, of the data being generated. Something may change on the instrument between when the manager checks in the morning and when you sit down to run, so it’s wise to check performance for yourself.

2. Parallel Arrangement and Fluidics Stability.

Most of the current instruments have a parallel laser arrangement. This means that the cells pass through the lasers in sequence, so they are exposed to only a single excitation source at a time. To ensure that the correct signal pulses are paired together, a factor called the “time delay” is calculated during QC.

If the time delay is off, this will adversely impact the data, as the cells arrive either too early or too late and the correct pulses are not paired together.

This can easily be identified retrospectively, but it is better to identify the issues during acquisition. If there is a loss of signal on a given excitation source, it can suggest a clog in the system that needs to be cleaned.

Consider a partial clog on the input side of the cytometer (before the flow cell). When the diameter of a pipe is reduced, the speed of the liquid through the constriction increases. This results in cells arriving at the interrogation point sooner than the time delay expects them.

A clog after the flow cell has the same issue, but now the pressure builds up behind the clog, slowing down the stream, and cells arrive after the time delay window. This can be seen in Figure 4.

Figure 4: Impact of clogs on data.

When running your experiments, put up a plot of a bright fluorochrome versus time for each of the lasers that are being used on the machine. This will help determine if there are problems with the fluidics, as well as with the lasers.

In the example shown here, PE-Cy5.5 was being excited by a green laser, the fourth in line, but a clog caused a back-pressure issue. The clog constricted the flow core, which increased the stream velocity and caused a spike in the event rate.

Eventually, the clog worked itself out and the signal came back. As with any quality control metric, it is important not only to run these controls, but to look at them on a regular basis.

3. Compensation.

Compensation is one of the most important steps in the flow cytometry process. For proper compensation, it is critical to bring the right controls to the instrument every time. This is such an important topic that it has been discussed extensively here. It is included here, in a discussion of instrument reproducibility, because compensation controls also give insight into how the system is working and provide a secondary proxy for instrument QC.

These compensation controls must follow the 3 commandments:

1. Controls need to be at least as bright as any sample to which you’ll apply the compensation. That means the control signal needs to be at least as bright as anything in your experimental signal. If it’s not as bright, then your compensation will be incorrect. It also has to be within the linear range of the detector.

2. The background fluorescence should be the same for the positive and negative control populations for any given parameter.

Compensation is a property of the fluorochrome, not a property of the antibody or of the carrier that brings the fluorochrome to the interrogation point. This is why you can use either beads or cells. What’s important is that the background fluorescence (autofluorescence) of the negative population and of the positive carrier are the same. Since compensation corrects to the background of the control, it is critical to have a positive and a negative in each control. This means it is possible to mix beads and cells in a compensation matrix as long as each positive is linked to the appropriate negative particle type.

3. The compensation colors must be exactly matched to your experimental color.
Alexa 488 cannot be used to compensate for GFP or for FITC. Likewise, if there are two different lots of a tandem dye, make sure the lots are matched between the experiment and the control. One lot of PE-Cy5 cannot be used to compensate a different lot of PE-Cy5, because each lot is manufactured separately and likely has different spectral characteristics.

BONUS 4. You need to have enough events!

This is a minimum of 10,000 single events, when using beads. When using cells, you need a minimum of 30,000 single cell events. Collecting more is good, but don’t collect less than these minimums.

Since fluorescence is a property of the fluorochrome, not the carrier, the question arises, “Should you use beads or should you use cells?” Either carrier is fine, as long as the positive populations have the same background fluorescence when unstained — i.e. if you use beads, then make sure you have unstained beads and if you use cells, make sure you have unstained cells. If you have some on beads and some on cells, then make sure you pair each positive with an appropriate negative.

4. Quality Control.

Often, the QC of an instrument is left to the manager of the instrument. This can be done in a variety of ways, from an instrument-specific QC protocol or something from the manufacturer, like the BD CS&T or the Attune performance tracking beads.

This process should be run every time the instrument is turned on, to ensure that a library of instrument behaviour is developed and to allow the users to understand how the system is performing. QC of this nature is a critical best practice for the instrument, and provides baseline confirmation that the system is working adequately.

Don’t hesitate to ask the manager of the instrument to look at their QC metrics, and ask for an explanation when things don’t seem clear. After all, the success or failure of the experiment lies in the hands of how well the flow cytometer is working, and most cytometrists will be overjoyed to talk shop!

In conclusion, one key area to improve consistency and reproducibility is to monitor instrument quality control. Knowing how the system is QCed and what happens when something goes wrong is paramount to ensuring that the instrument is properly characterized.

To learn more about Instrument Quality Control For Reproducible Flow Cytometry Experiments, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.

Flow Cytometry Mastery Class wait list | Expert Cytometry | Flow Cytometry Training

The Need For Speed In Flow Cytometry Data Analysis

Speed is a highly touted metric in flow cytometry. Look at any vendor’s website and you will see the highlights on how many events per second their instrument can acquire, how many cells can be sorted per second, and more. The limitations are imposed by the physics of flow cytometry, the speed of pulse processing, and more. With cell sorters, Poisson statistics dominate the speed calculation. As has been discussed before, the optimal sort rate is ¼ the frequency of droplet generation. Sorting faster will impact purity of the final product.

One of the trends in flow cytometry is pushing the limit of the number of parameters that can be measured at one time. The CyTOF threw the gauntlet down to start this new race by changing how the signal was detected. It didn’t take long for fluorescence-based cytometers to begin pushing past the 18-fluorochrome limit, and now instruments that can do 24 or more fluorescent parameters at the same time are available. Spectral cytometry may push this limit to 50 parameters or more in the near future.

With all these parameters, the data files become very large very quickly, and the ability to analyze such complex data becomes increasingly difficult. This has led to the desire to find analytical methods that can reduce the complexity of the data in some way to make it more manageable to find populations of interest. One of the most popular algorithms in flow cytometry circles is the tSNE algorithm. You can read more about it in these articles: van der Maaten and Hinton (2008), van der Maaten (2014), and Amir et al (2013).

tSNE allows for the visualization of high-dimensional data on a single bivariate plot. From these single plots, further analysis can be performed using other analytical techniques. However, the tSNE analysis, although powerful, is very slow and memory-intensive. In order to complete the tSNE algorithm in a reasonable amount of time, most datasets are downsampled.

Downsampling is a process where a smaller number of events is used as representative of the whole sample. This happens all the time in our daily lives and generally we don’t notice it. However, if you are a true audiophile, for example, there is a difference between an electronic copy of a piece of music and hearing it from the original source.

When the data is downsampled, there is a probability that rare events will be removed from the data. Since these low frequency events are often the pieces of data the researcher is most interested in, the larger the sample size that can be processed, the less likely this is to occur.

This brings us back to the need for speed. The goal of our high-dimensional experiments is to identify changes in the experimental system, finding those rare events that allow for a more complete understanding of the biology. It becomes a balancing act between adding more data and keeping the overall analysis time manageable.

There are several commercially available implementations of the tSNE algorithm available on the market. The question becomes, “How fast can each of these implementations perform the tSNE analysis on a standard file, using a typical desktop computer?” In the interest of fairness, you can download the file that was used and the method for running the competition here.

The competitors in this test were: Cytobank™, FCS Express™, and FlowJo®. For those more sophisticated, and as a benchmark, the freely available R implementation of tSNE was also run.

Before the results are revealed and the winner of the first tSNE speed race is named, it is important to understand how the timing was done and the steps in each implementation. These are presented below, in alphabetical order.

Cytobank™ requires uploading the data to the cloud, where it can inform you that your data is in a queue to be processed. The timings below include both the upload and wait time (in these tests, these were under 2 minutes each, for a total of ~4 minutes). The queue waiting time is likely variable, depending on how many other people around the world have samples waiting to be analyzed by tSNE, so your mileage may vary. Cytobank™ does not require a separate downsampling step, as “desired total events” is a setting built into the viSNE (tSNE) module. Thus, the time for downsampling is automatically part of the viSNE (tSNE) calculation time itself.

FCS Express™ does not require a separate downsampling step, as “sample size” is built into the FCS Express tSNE transformation tool. Thus, as in Cytobank, the time for downsampling is automatically part of the tSNE calculation time itself.

FlowJo® requires installation of the DownSample plugin. To use this for tSNE analysis, the user must select the number of events to be downsampled (plotted as “sample size” in the graphs below), save the layout, wait for the downsampling to finish, and use the tSNE plugin to calculate tSNE. Downsampling time is reflected in the graph below and was ~20 seconds, regardless of the number of events. Time to save the layout was neglected.

For the tests using R, sample sizes of the original file were generated with a sample-by increment, and Rtsne (available here) was run on the sampled data. As with FlowJo, the total time (i.e., for the separate downsampling step + the time for the tSNE calculation) was graphed.
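
None of the benchmarked packages are reproduced here, but as a rough sketch of the general downsample-then-tSNE workflow, the following Python example uses the fcsparser and scikit-learn libraries; the file name and channel list are hypothetical and should be replaced with your own.

```python
import time
import fcsparser                      # assumed available for reading FCS files
from sklearn.manifold import TSNE

# Hypothetical file and channel names - substitute your own.
meta, data = fcsparser.parse("sample.fcs", reformat_meta=True)
channels = ["FL1-A", "FL2-A", "FL3-A", "FL4-A"]

# Downsample to keep the run time manageable; rare populations may be lost.
sample_size = 100_000
subset = data[channels].sample(n=min(sample_size, len(data)), random_state=42)

start = time.perf_counter()
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=42).fit_transform(subset.to_numpy())
print(f"tSNE on {len(subset):,} events took {time.perf_counter() - start:.1f} s")
```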

The methods and timing process are described here, along with the dataset.

Various sample sizes up to at least 300,000 were tested in all 4 software packages. Ungated plots of tSNE calculated on 100,000 events are shown in Figure 1 below. The color scaling and resolution of the FCS Express plot were changed from the default to facilitate comparison with the Cytobank plot, but this was not possible in FlowJo. Also, note that it is the nature of tSNE that results vary with each run, due to the nonlinear dimensionality reduction the algorithm performs. Don’t worry if your plots differ in appearance from those below.

Figure 1: Results of tSNE analysis on 100,000 events in the 4 software implementations. Color is based on CD90 expression (FITC label/FL-1 in the dataset).

The meat of this friendly competition was to determine which of the packages performed the tSNE analysis the fastest. The winner was FCS Express (green), followed by R (purple) and FlowJo (red), with Cytobank (blue) coming in last (Figure 2). At 50,000 events, Cytobank’s calculation took almost 8 times as long as FCS Express; at 300,000 events, >5x as long.

Figure 2: Results of speed test as a function of sample size.

Let’s break down the head-to-head between FCS Express™ (FCSE) and FlowJo® (FJ), 2 of the most commonly available packages for most researchers. For these packages, the tests were run in triplicate. When using a sample size of 15,000 events, the processing times were 0.56±0.03 minutes for FCSE versus 2.53±0.12 minutes for FJ. FJ is over 4 times slower than FCSE! At 100,000 events, FCSE still had a dramatic lead: 4.74±0.23 minutes versus FJ’s 17.91±0.48 minutes, nearly 4 times slower.

So, why is speed of the algorithm so important? Why worry when you can just set up the analysis and go for lunch? If you’re like me when I’m analyzing data, I like to stay in that mindset. Distractions, like a long break, can impact the train of thought about the analysis. Additionally, with long run-times, it is depressing to return to the data and see the calculation stopped prematurely because of an incorrect parameter or some other error.

More importantly, the tSNE analysis is one part of the process. To fully understand the results and identify the populations of interest requires additional work, including gating and backgating. Having the tSNE analysis complete 4 times faster means getting to this additional analysis that much sooner, and it means one can analyze 4 times the amount of data with FCSE compared to FJ.

Another reason that this becomes important is for rare event analysis. To ensure that the rare event population is not lost in the downsample, it is necessary to run a large file. Further, many researchers are analyzing multiple files merged from an experiment, to ensure more accurate and consistent analysis compared to single file analysis. Surprisingly, FJ was unable to complete the tSNE calculation on sample sizes larger than 200,000 events. The other 3 packages were able to complete over a million events — FCSE completed 2 million events in 4.37 hours, and R took 15.51 hours. Cytobank (using the Premium product) was limited to 1.3 million events, and that took 7.57 hours.

In conclusion, this experiment was very illuminating. One takeaway is that the “cloud” is not necessarily better for analysis. After uploading data to Cytobank, analysis doesn’t tie up the local computer resources, which is a plus. It can also facilitate collaborations. However, it may be less expensive to invest in a more powerful local computer and take advantage of AWS or other cloud-based data storage platforms for sharing the data. If you’re facile with coding, go with R, the free implementation, although it’s much slower than FCSE. For my time (and we know time is money), FCSE is the winner of this speed test.

Remember, if you want to try this for yourself, the data and instructions are found here.

To learn more about The Need For Speed In Flow Cytometry Data Analysis, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.

Flow Cytometry Mastery Class wait list | Expert Cytometry | Flow Cytometry Training

Best Practices In Flow Cytometry Compensation Methodologies

What is compensation and when should you do it?

In this first of 3 blog articles, we will discuss the principles of compensation, as well as why it is important, and how to perform compensation. Subsequent articles will discuss the rules that must be followed for proper compensation and some of the common compensation myths that permeate the field. It all begins with an understanding of the process of fluorescence.

After excitation, a fluorescent molecule emits a photon. This photon has an emission maximum — that is, the most probable photon wavelength that will be emitted. However, this emission is not so specific, and there is a range of wavelengths over which photons can be released from the molecule. This can be modeled with a variety of software. A typical emission profile for a common fluorochrome, fluorescein, is shown in Figure 1.

Figure 1: Fluorescein emission profile.

As can be seen from this spectrum, fluorescein has a maximal emission of about 524 nm. However, it has a very long tail, and there is a chance (albeit small) that a photon of over 600 nm can be emitted by this molecule. Fluorescence happens in the nanosecond range, so while the cell traverses the excitation source in microseconds, each fluorescent molecule has the chance to go through the excitation/emission cycle multiple times. Thus, with each cycle, a photon is emitted independently.

If one is looking at cells labeled with a single fluorochrome, this is not a concern. It only becomes important when cells are labeled with multiple fluorochromes with overlapping emission spectra. This can be visualized using the ImageStream®, as observed in Figure 2. This data is courtesy of Dr. David Basiji.

A. Uncompensated Cells

B. Compensated Cells

Figure 2: Demonstration of the results of compensation using the ImageStream®.

Cells were labeled with 4 different fluorochromes, and run on the ImageStream®. The uncompensated data is shown on the top. After compensation is properly calculated and applied, the spillover from one fluorochrome is corrected for, and measured only in the proper channel.

This is why compensation is so important. Otherwise, it is impossible to make an accurate measurement of the single fluorochrome in the presence of multiple fluorochromes.

There are several different methods to compensate. In 2011, the New England Cytometry users group hosted a one-day meeting to discuss these concepts. You can still download the talks from this meeting at the link above. For the purposes of this article, we will consider 3 methods: non-pensation, manual compensation, and automated compensation. The idea of spectral unmixing (spectral compensation) is gaining traction with the proliferation of spectral cytometers, but will be the subject of a separate blog.

1. Non-pensation

Non-pensation is the concept of not compensating the data. Instead, non-pensation takes advantage of 3 factors: a wide dynamic range of a detector, a fixed voltage PMT, and a visualization tool that allows the very accurate drawing of gates.

This process was demonstrated on the Accuri™ flow cytometer (now part of BD Bioscience), a fixed voltage flow cytometer with 4 fluorescent detectors, and a wide dynamic range. Additionally, the software has a magnifying tool, allowing very precise positioning of gates.

Data was collected from multiple different sample types, from multiple cytometers, all of which were set up and locked down the same way. The results are shown in Figure 3.

Figure 3: Uncompensated FITC and PE data from multiple samples and multiple machines showing the location of the single and double stained gates for this system. Data courtesy of Claire Rodgers.

As can be seen from this data, the location of single stained fluorochromes is consistent and predictable for instruments with the characteristics of these systems. This data is from the early 2010s, so the specifics of the newer instruments may have changed, but the principle is the same.

This method is limited in implementation, and ultimately not recommended.

2. Manual Compensation

During the early years of multicolor flow cytometry, where most researchers were using 2-4 fluorochromes, manual compensation was the vogue method to correct for the spillover of a fluorochrome into secondary detectors.

The process started by placing the unstained cells in a box in the lower left corner of the plot (the first decade) and placing a quadrant on this plot. Single color controls were run, and a slider bar (or equivalent) was used to adjust the compensation. This is shown in Figure 4, which was taken from the FACSCalibur Users Manual.

Figure 4: Manual compensation. By adjusting the percentages, the positive sample was adjusted to below the quadrants.

This technique — “Cowboy Compensation”, coined by Joel Sederstrom (he’s the one I first heard using this term) — is illustrated with data generated in FCS3.0 in the figure below.

Figure 5: Cowboy Compensation of the signal in the primary detector (B530/30). The black line represents the ~median of the negative population.

Following this method, samples are often overcompensated. This is especially true of data from FCS 2.0 files, where the ability to visualize the full range of the spread was compromised by the hardware limitations. This is shown in Figure 6.

Figure 6: Comparison of uncompensated flow cytometry data in FCS2 or FCS3. The dotted box represents the approximate area that was magnified on the right.

As can be seen on the axis of the FCS2 data, there is a substantial amount of data piled in the first bin, which is showing up as red along the axes. This is not an issue with FCS 3 data, which allows for the visualization of the full range of the data.

One can use statistics to help improve the process, to an extent. In this case, a gate is placed around the positive and negative populations. As the compensation value is increased, the goal is to match the median (or geometric mean, depending on what is available in the software) of the positive gate in the secondary channel to the median of the negative gate in the secondary channel. This looks something like what is shown in Figure 7.

Figure 7: Using Statistics to optimize compensation. The median values of the positive and negative gate in the secondary channel (B-585/42) are shown in the table below.

The real compensation value is somewhere between 20 and 25%. Plotting a graph with the median on the Y axis and the compensation value on the X axis, as shown below, allows 2 lines to be fit. The intersection point of these 2 lines is where the median values are equal. With a little bit of algebra, the equations can be set equal to each other, and the compensation value where the MFIs are matched can be solved for. In this case, that value is 24.445%.

Figure 8: Using matched medians and algebra to determine the compensation value of FITC into the PE channel.
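
Here is a minimal Python sketch of that algebra: fit a line to each series of medians and solve for the crossing point. The trial compensation values and medians below are made up, so the answer will not match the 24.445% from the figure.

```python
import numpy as np

# Hypothetical secondary-channel medians at a few trial compensation values.
comp_values = np.array([15.0, 20.0, 25.0, 30.0])
median_pos = np.array([1450.0, 620.0, -210.0, -1030.0])   # positive gate
median_neg = np.array([95.0, 95.0, 95.0, 95.0])           # negative gate

# Fit a line to each series and solve m1*x + b1 = m2*x + b2 for x.
m1, b1 = np.polyfit(comp_values, median_pos, 1)
m2, b2 = np.polyfit(comp_values, median_neg, 1)
crossover = (b2 - b1) / (m1 - m2)
print(f"Medians match at ~{crossover:.2f}% compensation")
```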

That is a lot of work. Cowboy Compensation is not recommended at all. In fact, with more than 3 fluorochromes, manual compensation becomes difficult and should not be considered. While there are those who will swear by manual compensation, it is highly error prone, and should also be avoided. That leaves the last method of compensation, which is recommended.

3. Automated Compensation

Taking a step back, if one were to consider the process of signal detection, for a hypothetical 3-color experiment, it could be diagrammed as shown in Figure 9.

Figure 9: Origins of the fluorescent signal as measured by the detector system.

The left side of this box represents what we are trying to determine: the actual amount of fluorescence on the target. On the right side is what is measured by the detector. The lines represent the photons of light from each fluorochrome. Taking the FITC signal to start, it is made up of the actual amount of FITC label on the cells, plus the actual amount of PE on the cells times a constant M21 (representing the detection “efficiency” of PE by the FITC detector), plus the actual amount of Cy5-PE times a second constant M31 (the fraction of Cy5-PE captured by the FITC detector). Mathematically, this can be represented as:

FITCobs = FITCact + M21 × PEact + M31 × Cy5-PEact

By using the appropriate single stained controls and following the rules of compensation, these values can be determined. It boils down to a matrix algebra problem.

Figure 10: Matrix algebra representation of the compensation process.
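
For intuition, here is a minimal numpy sketch of the linear-algebra step, using a hypothetical spillover matrix; real software estimates the spillover coefficients from the single-stained controls rather than taking them as given.

```python
import numpy as np

# Hypothetical spillover matrix S for a FITC / PE / Cy5-PE panel:
# S[i, j] = fraction of fluorochrome j's signal detected by detector i.
S = np.array([
    [1.00, 0.02, 0.01],   # FITC detector
    [0.24, 1.00, 0.05],   # PE detector
    [0.03, 0.12, 1.00],   # Cy5-PE detector
])

observed = np.array([5200.0, 9800.0, 3100.0])   # raw signals from the 3 detectors

# observed = S @ actual, so the compensated ("actual") values solve S @ x = observed.
actual = np.linalg.solve(S, observed)
print(actual)
```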

Fortunately, the researcher doesn’t have to solve this problem, as automated compensation is available on most acquisition and analytical software. The rules of how to get the best controls will be discussed in the next blog in this series.

3 different approaches to compensation have been discussed. The first, non-pensation, is not recommended, and is only possible under a narrowly defined instrument configuration. The second, manual compensation, is also not recommended for anything more than 2 fluorochromes. It is error prone and subject to the researcher’s judgement, unless statistics are invoked. For polychromatic flow cytometry, the best practice is to use automated compensation methodologies. This will ensure consistent and accurate compensation, provided some rules are followed. The next article in this series will discuss what these rules are and how they apply to compensation.

To learn more about the Best Practices In Flow Cytometry Compensation Methodologies, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.

Flow Cytometry Mastery Class wait list | Expert Cytometry | Flow Cytometry Training

Reproducibility In Flow Cytometry Requires Correct Compensation

Why do we have to compensate flow cytometry data?

Newcomers to flow cytometry are often confronted with one of the most confounding issues in flow cytometry. That is, trying to understand the whole idea of “compensation”. It can be explained theoretically, mathematically, by trial and error, or by “take my word for it”. Depending on the audience, a combination of these are used to get the point across.

Simply put, compensation is the mathematical process of correcting the spectral spillover of a fluorochrome into a secondary detector. It relates to the physics of fluorescence. To understand what this means, let’s start with the Jablonski diagram of fluorescence.

Figure 1: Jablonski diagram of fluorescence. Used under a Creative Commons license (original).

A fluorescent molecule starts at rest, with electrons in the ground state. When a photon of light hits this molecule, it is absorbed (purple line), promoting an electron to a higher energy state. There are a variety of ways that the energy release can happen — we are specifically interested in fluorescence, where the molecule releases a photon of light as it returns to the ground state.

That emitted photon is of lower energy, and therefore longer wavelength, than the exciting photon. This process can be modeled, and using one of the various spectral viewers available on the Internet, it is easy to visualize. Such a model is shown below for AlexaFluor™ 488 (AF488).

Figure 2: The excitation (dotted line) and emission (solid) for AlexaFluor™ 488.

In an ideal world, the fluorescence emission would be just a single wavelength, but this is not an ideal world. The maximal emission is about 520 nm, but the tail of the emission extends out past 600 nm.

This becomes a problem for flow cytometry when the emission spectra of different fluorophores overlap into different detectors. Shown in Figure 3, superimposed on the AF488 emission spectrum, are two bandpass filters: a 530/30 nm (blue) and a 585/42 nm (green). Looking at the green filter, it is clear that a percentage of the AF488 curve is measured in that wavelength range.

Figure 3: AlexaFluor™ 488 emission profile showing overlap into a second filter range.
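
As a rough illustration of how much of an emission curve lands in a given filter, here is a Python sketch that integrates a toy spectrum over two bandpass ranges; the curve is invented and does not reproduce real AF488 data.

```python
import numpy as np

# Toy emission curve standing in for a green dye: a main peak near 520 nm
# plus a long red tail. Real spectra should come from a spectral viewer.
wavelengths = np.arange(480, 701)          # 1 nm steps
emission = (np.exp(-((wavelengths - 520) / 18.0) ** 2)
            + 0.25 * np.exp(-((wavelengths - 560) / 45.0) ** 2))

def fraction_in_filter(center, width):
    """Fraction of the total emission falling inside a bandpass filter."""
    lo, hi = center - width / 2, center + width / 2
    in_band = (wavelengths >= lo) & (wavelengths <= hi)
    return emission[in_band].sum() / emission.sum()

print(f"530/30 filter captures {fraction_in_filter(530, 30):.0%} of the emission")
print(f"585/42 filter captures {fraction_in_filter(585, 42):.0%} of the emission")
```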

Compensation addresses this spectral spillover and is the focus of the next 3 blog posts. To start, let’s establish some ground rules, the “3 Rules of Compensation”, which first appeared here. Understanding these core concepts is essential to understand how to properly compensate and refute some of the myths of compensation.

1. The control should be at least as bright as the sample.

The first rule is that controls need to be at least as bright as any sample you will apply compensation to.

Mathematically, compensation is calculating the slope of the line between the single stained positive and the unstained negative. As shown in Figure 4, if dim particles are used to calculate the slope, the value is 23.58. When bright particles are used, the slope is 24.55. The difference is explained by the fact that there is a greater error in the dim particles than there is in the bright particles (note the size of the error bars). Since this is on a log-scale, it is further magnified as the error of the measurement varies with the square root of the absolute value of the measurement.

Figure 4: Demonstration of the first rule of compensation. Control must be at least as bright as the sample it is applied to.

Baked into the first rule are the following caveats: the signals need to be on scale and within the linear region of the PMT detector. When the signal violates either of these 2 conditions, accurate compensation is impossible. This is shown in Figure 5, where the signals in the yellow shaded regions violate these caveats, and will result in incorrect compensation.

Figure 5: Demonstration of the linear scale of a detector.

When this rule is violated, the consequences can be profound, as shown in the figure below. On the left is the compensation control used to set compensation; this has been compensated. On the right is the fully stained sample. As can be seen from the black line, representing the median for the compensation control, the positive sample is above this line and, based on the red line, undercompensated.

Figure 6. Consequences of violation of the 1st rule.

2. The background of the positive and negative carriers must be matched.

The second rule is that the background fluorescence should be the same for the positive and negative control populations for any given parameter. Since the goal, as discussed above, is to calculate the slope of the line between the positive sample and the negative, the backgrounds must be the same for the two to be comparable.

In a future article, the advantages of cells or beads as a carrier will be discussed. For the purposes of the second rule, this means that if beads are used as a positive control, the correct comparison is an unstained bead. Likewise with cells. However, you cannot use cells as a negative control and beads as a positive. This is what the Universal Negative forces many users to do, and why it should be avoided.

Shown in Figure 7, are the autofluorescence of unstained beads and cells. Notice the difference in background between the two.

Figure 7: Autofluorescence of unstained cells (red) and beads (blue), along with positively stained beads (green).

The consequences of this can be seen in Figure 8. Here, the blue lines connect the ~medians of the unstained cells with the positive bead, and the red lines connect the ~medians of the unstained beads with the positive bead.

Figure 8: Consequences of violating the second rule.

In a compensation matrix, you can have some fluorochromes compensated with cells and others with beads, as long as there is the appropriate negative control for each sample. To put it another way, no matter if you are using cells or beads, just make sure you use the matched controls, i.e. matched positive and negative controls.

3. Compensation color must match experimental color exactly.

The third rule of compensation is that the compensation color must be matched to your experimental color. This means matched fluorochrome, matched sensitivity, and matched treatment. Shown in Figure 9 are the emission spectra of 3 different “green dyes”. All are typically read off blue excitation with a 530/30-ish bandpass filter. Is it possible to use one to compensate for another?

Figure 9: Emission spectra of GFP, Brilliant Blue (BB)515 and Fluorescein (FITC). The emission maxima are indicated by the lines.

The answer is “no”! Each of these have different spectra, so you cannot substitute one for the other. This concept extends to tandem dyes. These manufactured dyes are comprised of 2 fluorochromes, a donor and an acceptor. The donor dye is excited, and rather than emit a photon, will excite the acceptor dye (with caveats), and the emission of the acceptor dye is what is measured by the flow cytometer. Some tandem dyes are easy to spot, as they have the names of 2 fluorochromes, such as: Cy5-PE, Cy7-APC, Cy5.5-PerCP. Others are not so easy to spot, including BV570, BV605, or BB700. Figure 10 shows 2 different lots of the same tandem dye.

Figure 10: Spectral overlap of 2 different lots of the same tandem dye (Cy7-PE).

If Lot 1 was used in the experiment and Lot 2 used for compensation, it is clear there would be compensation errors, resulting in incorrect data interpretation. So, make sure the dye lots are matched.

4. Collect enough events.

This is a bonus rule for everyone, an extension of the original rules. Some acquisition software presets the number of events to collect for compensation controls. Don’t rely on that default value.

For your flow cytometry data to be valid, you must collect enough events. The more events, the better. In the case of a bead carrier, at least 10,000 single beads. With cells, a minimum of 30,000 events — more, if the positive samples in question are rare. A good rule of thumb is a minimum of 5,000 positive events.

Figure 11: Effects of collecting more events on the compensation value.

Here, as the number of single events increases, the compensation value approaches the final value of 22.51. This is because with fewer events, the accuracy of the measurement is decreased. Data storage is relatively inexpensive, as are beads, so don’t skimp on collecting enough events.

Understanding the 3 rules of compensation, and applying them to your everyday workflows, is an essential step toward good, consistent, and reproducible flow cytometry data. Making sure the controls are bright, and treated the same way as the samples, is essential. Don’t bring unfixed controls when your samples are fixed, as the controls will not reflect the spectra of the fixed samples. Make sure not to rely on the “Universal Negative” (a single sample used to set the background for every control), and collect enough events so that an accurate measurement is made, further improving the quality of your controls and therefore the data.

To learn more about Reproducibility In Flow Cytometry Requires Correct Compensation, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.

Flow Cytometry Mastery Class wait list | Expert Cytometry | Flow Cytometry Training

The Truth About Flow Cytometry Measurement Compensation

In most research labs, there exists a notebook that contains the tried and true protocols for lab members to follow. These hallowed, often coffee-stained, pages teach the researchers everything — from how to make media, passage cells, and run restriction digestions, to how to prepare cells for flow cytometry analysis. These protocols are time-honored and tested, so the new researcher doesn’t question the wisdom of the “Protocols Book”.

Unfortunately, these pages are not refreshed with the best practices that have evolved over time as the technology and our understanding has changed and grown. The “truths” in this book are not always right anymore, but the new user doesn’t necessarily know any differently. It is for this reason that there are suboptimal practices that permeate flow cytometry experiments to this day. The last 2 blog articles have discussed the theory and practice of compensation. This blog article will help shine light on some of these historical practices and why they need to be changed.

You can use a universal negative

The idea behind the Universal Negative is that a single tube, typically unstained cells, is used to set the negative population for establishing the compensation matrix. This was the default method when performing manual compensation.

As discussed previously, the Universal Negative violates the 2nd rule of Compensation, which states the positive and negative carrier must have the same background.

A lot of the automated analysis packages on flow cytometry software, both acquisition and analysis, offer the ability to identify a single sample that is supposed to be representative of the background fluorescence of the population. As shown in the figure below, it is clear that a single, unstained sample cannot be used to properly set the background.

Figure 1: Unstained cells (red) and beads (blue) have different background fluorescence.

For those using BD DIVA software to acquire samples when setting up compensation, make sure to uncheck “Include separate unstained control tube/well”. After acquiring each compensation control and gating on the FSC x SSC plot (P1), the histogram in question will have a P2 gate placed around the positive population. The user can then draw a P3 gate around the negative population, and the software will use the P3 gate to calculate compensation.

Compensation values cannot be over a certain percentage.

Every now and then, there is a suggestion that compensation must be no greater than some value, usually around the 40 to 50% range. It’s important to remember that compensation is the result of a mathematical correction based on the appropriate controls, as described earlier in this series.

This is often followed by the idea that, rather than have the compensation value too high, researchers should adjust the voltage to reduce the compensation value. As shown in Figure 2, while changing the voltage does impact the compensation value, it does not impact the spread of the data.

Figure 2: Spreading error is independent of compensation value. PE and PE-Cy5 were collected over a range of voltages for the PE-Cy5 detector while holding the PE detector voltage constant. Compensation values for each voltage were calculated in FlowJo, yielding values ranging from 2.7% up to 2,900%. Importantly, the spread of the PE-Cy5 beads in the PE channel, as indicated by the dashed line, is unchanged. This data shows that a high compensation value is not indicative of severe spillover spreading. Data courtesy of the University of Wisconsin Carbone Cancer Center Flow Cytometry Lab.

The best practice is to set voltages during panel optimization by performing a voltage walk (also known as a voltration). In this process, properly titrated antibodies are used to stain cells, which are run at increasing voltages. The Staining Index is calculated at each voltage, and the point where the best separation for the antibody is achieved is identified.

Figure 3: Voltage walk (Voltration) with 2 different antibodies. On the left, the optimal voltage (in green) is the same as determined by the peak 2 method. On the right, increasing the voltage increases the SI by approximately 15%.
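One common form of the Staining Index is the difference between the positive and negative medians divided by twice the SD of the negative population. The minimal sketch below, using hypothetical event arrays, shows how it might be computed at each step of a voltage walk; some labs use a robust SD instead, so treat this as one reasonable variant rather than the definitive formula.

```python
import numpy as np

def staining_index(positive, negative):
    """Staining Index = (median_pos - median_neg) / (2 * SD_neg).

    Compute this for the stained and unstained populations collected at
    each PMT voltage in the walk, then pick the voltage where separation
    stops improving.
    """
    sd_neg = np.std(negative, ddof=1)
    return (np.median(positive) - np.median(negative)) / (2 * sd_neg)

# Hypothetical voltage walk: {voltage: (positive_events, negative_events)}
# best_voltage = max(walk, key=lambda v: staining_index(*walk[v]))
```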

Compensation introduces error into your dataset.

Error is present in all scientific measurements. This comes from various sources, from pipetting error to photon counting. This error ends up in the data, leading to the spread of the data that is observed in flow cytometry plots.

One thing that worries researchers when they compensate is that a large error appears to be introduced into the dataset. This is simply not true; the apparent error is a consequence of how the data is displayed on the log scale, as illustrated in Figure 4.

Figure 4: Moving from the high end to the low end of the log scale impacts the perception of the data.

The 5th and 95th percentiles were determined in the same channel (red arrow), which allows the spread of the data to be determined (blue arrows). When properly compensated, that spread (872 units) is maintained. However, the data is now shifted to the low end of the log scale (right plot).

This is why new visualization methods are needed, to help see the full spread of the data. As shown in Figure 5, there is a large amount of data on the axis (red circle). To properly visualize that, and the spread of the data, a transformation has been applied, in this case the Bi-Exponential transformation. This allows for the full spread of the data to be visualized, and proper gating to be established.

Figure 5: Biexponential transformation to properly visualize the spread of the data.
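Vendor software implements the biexponential (logicle) transform for this purpose. As a rough stand-in, an inverse hyperbolic sine behaves similarly: approximately linear around zero and logarithmic at high intensities. The sketch below only illustrates that idea, and the cofactor value is an assumption that would need tuning per detector.

```python
import numpy as np

def arcsinh_display(x, cofactor=150.0):
    """Arcsinh display transform, a simple stand-in for the biexponential.

    Near zero the transform is approximately linear, so compensated events
    that spread around (and below) zero stay visible off the axis; at high
    intensities it behaves like a log scale. The cofactor sets where the
    linear region ends and should be tuned for each detector.
    """
    return np.arcsinh(np.asarray(x, dtype=float) / cofactor)

# Example: events piled on or below zero remain distinguishable after transform
print(arcsinh_display([-500, 0, 500, 50_000]))
```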

You can reuse your compensation matrix.

There are those days when everything goes wrong. The experiment is salvaged, but the controls are lost. “No problem,” thinks the researcher, “the matrix from last week should be fine, right?”

Wrong! The idea of reusing the matrix from a previous experiment is one that people cling to, but is not good science.

For compensation to be accurate, the third rule states that the control must use the identical fluorochrome and be collected at identical instrument settings. Using a matrix from last week (or even yesterday) can easily violate that rule. Tandems degrade and instruments can vary. What if the person before you ran a dye that sticks to the fluidics and compromises your data? What if the instrument had a major alignment or repair?

Bottom line: with the relatively low cost of capture beads, and the fact that you don't need to use the same concentration of antibody as on your samples, there is no excuse to reuse a matrix or ignore this critical control.

The topic of compensation is a critical one for the cytometrist to understand. It requires adherence to some specific rules, an understanding of how the instrument works, and how fluorescence occurs. Poor or incorrect compensation can easily lead to incorrect conclusions, and decreases the reliability and robustness of the data generated.

Understanding compensation, and being armed with the knowledge, allows the researcher to combat those fairytales that continue to make their rounds in science. It is time to put them to bed and move forward with a full understanding of the process.

To learn more of The Truth About Flow Cytometry Measurement Compensation, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.


Why Cell Cycle Analysis Details Are Critical In Flow Cytometry


The lifecycle of a cell can be described in stages. Diploid cells spend much of their time in a resting state, where a cell does what a cell does — such as undergo differentiation. In some cases, the cells go into a quiescent state, where the level of RNA is reduced. When the appropriate signals are received, cells begin to bulk up and replicate their DNA in preparation for division into 2 daughter cells. After the synthesis phase, the cells enter a second period of rest, where everything is checked before the cells undergo mitosis and produce 2 daughter cells. The cycle repeats itself until the cells die. The cell cycle is usually depicted as shown in Figure 1.

Figure 1: The Cell Cycle. Image from Wikipedia.

While there are many differences in cells at each stage of the cell cycle, one of the most obvious is the amount of DNA that the cell contains. At the G0 and G1 phases, the cells have a normal amount of DNA (2N for a diploid cell). Upon entering the S phase, the DNA content begins to increase until it doubles (4N) and the cells reach the second gap (G2) phase. The cells eventually undergo mitosis (M), producing 2 daughter cells with 2N DNA content.

Diseases including cancer, Alzheimer's, Parkinson's, and more involve cell cycle dysregulation at some level. Cells such as the megakaryocyte undergo endoreduplication as part of their normal development. In plants, polyploidy is common: the durum wheat used to make pasta is a tetraploid wheat, while a hexaploid wheat makes your bread lighter. Thus, cell cycle analysis remains an important tool in the researcher's toolbox.

Cell cycle analysis was one of the first clinically robust flow cytometry assays, where it was used to examine the DNA content of tumors to gauge the aggressiveness of the cancer. In fact, Shankey and colleagues published guidelines on how to implement DNA analysis in the clinic.

Cell cycle analysis appears to be a deceptively simple assay, as the base assay only requires 1 fluorochrome. However, many steps must be optimized to get high-quality cell cycle histograms.

Basic cell cycle protocol

Below are 2 basic protocols that use Ethanol for fixation. One uses PI, and the other, DAPI. The choice of dye will be discussed below.

Ethanol fixation is preferred for cell cycle analysis, and while aldehydes can be used, the crosslinking nature of these chemicals can impair the DNA staining when using intercalating dyes, resulting in a less accurate measurement (Darzynkiewicz, et al. 2017).

Shown here is a basic protocol for staining cell cycle only, with the 2 most commonly used DNA dyes, propidium iodide and DAPI.

The main difference between these 2 dyes is that when using propidium iodide, RNase needs to be added as well.

No matter which dye you are using, take about one million cells and fix them with ice-cold 70% ethanol. It is critically important to add the cells to the ethanol in a dropwise fashion. Have the tube on a vortex, moving at a reasonable speed — not slow, but not resuspend-my-DNA fast. Drop the cells into the center of the vortex and wait until the cells are fully mixed with the ethanol before adding the next drop. It takes practice, but if the cells go into the ethanol too fast, you will end up with goop.

Once you have fixed the cells, they can be stored in the fridge for a few weeks. There may be some signal degradation after a week or two, but it’s very much cell type and cell line dependent. So, when planning to store cells, make sure you do a test first.

After the cells are fixed, they can be stained and analyzed.

First, centrifuge the fixed cells — 10 minutes at 300 g is a good place to start. Ethanol-fixed cells don’t pellet like normal cells — the ring may be more diffuse than what is seen with non-fixed cells.

Next, resuspend the cells in PBS and allow the cells to rehydrate for between 30 seconds to 2 minutes. Spin the cells down again and resuspend in a staining buffer.

When making the staining buffer for propidium iodide, use approximately 50 µg/mL propidium iodide and 100 µg/mL RNase A. Make sure the RNase A is DNase-free. If you cannot find DNase-free RNase A, heat the RNase A to deactivate any contaminating DNase. The RNase A will survive the heat — it's one of the most stable proteins on the face of the Earth.

Wait at least 30 minutes at room temperature for the RNase A to work, and the propidium iodide to bind.

If you are using DAPI instead of PI, after you rehydrate the cells in PBS and spin them down a second time, you can resuspend the cells in a flow buffer containing 1 µg/mL DAPI.

Wait at least 30 minutes before analyzing on the cytometer to give the DAPI time to bind the DNA. 30 minutes is just a starting point, and depending on your cell line, you might need to change this time to get optimal staining.
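Translating those final concentrations into pipetting volumes is just C1V1 = C2V2. The sketch below uses hypothetical stock concentrations (1 mg/mL PI and 10 mg/mL RNase A), so check the stocks in your own lab before using these numbers.

```python
def stock_volume_ul(stock_ug_per_ml, final_ug_per_ml, final_volume_ml):
    """C1*V1 = C2*V2 rearranged for V1; returns the stock volume in microliters."""
    return final_ug_per_ml * final_volume_ml / stock_ug_per_ml * 1000.0

# Hypothetical stock concentrations -- confirm against your own reagents.
buffer_ml = 10.0
pi_ul = stock_volume_ul(1000.0, 50.0, buffer_ml)        # 1 mg/mL PI stock   -> 500 µL
rnase_ul = stock_volume_ul(10_000.0, 100.0, buffer_ml)  # 10 mg/mL RNase A   -> 100 µL
print(f"Add {pi_ul:.0f} µL PI and {rnase_ul:.0f} µL RNase A to {buffer_ml:.0f} mL of buffer")
```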

Choosing a DNA binding reagent.

There are many reagents that will bind DNA, and they tend to bind in a couple of different ways. The dye can bind to the major or minor groove, may be AT or GC preferential, or may intercalate into the DNA strand itself. This is summarized in Figure 2.

Figure 2: Different binding characteristics of DNA binding dyes (From Thermofisher).

With intercalators, the dye actually wedges in between the base pairs in the DNA helix. Propidium iodide, 7-AAD, and DRAQ5 are 3 common intercalators.

Some dyes are bis-intercalators. Most of the bis-intercalators have a doubled-up name. There’s TOTO and YOYO and POPO.

Methyl green is an example of a major groove binder, but this isn’t commonly used in flow cytometry.

All of these dyes bind stoichiometrically, which means the amount of bound dye (and therefore the fluorescence signal) is proportional to the amount of DNA present.

These dyes also come in all regions of the spectrum. So, depending on the instrument configuration and the needs for downstream assays or multiplexing, there are dyes that will bind to DNA and might give an acceptable cell cycle analysis pretty much anywhere in the spectrum.

Shown here are some of the most common dyes used for cell cycle analysis.

DAPI and the Hoechst dyes prefer a UV excitation source, but they can sometimes run off of a violet laser. So, if you don’t have a UV laser, do a little test run and see if there is good enough resolution using the violet laser for them.

DAPI is only slightly cell permeable, so the cells need to be fixed when using DAPI so it can access the DNA.

There are two Hoechst dyes: 33342 and 33258. Hoechst 33342 is cell permeable, and so is a great dye to choose for supravital cell cycle analysis. Meanwhile, Hoechst 33258 is only slightly cell permeable and would be used on fixed cells.

Propidium iodide is not cell permeable and requires cell fixation. 7-AAD is not cell permeable either.

DRAQ5 is cell permeable but, word of warning, it is known to be cytotoxic. So, using DRAQ5 on live cells is not recommended if those cells need to live afterwards. DRAQ7 is an interesting DNA dye because it has an extremely far-red emission peak, so it is an option when the cell cycle readout needs to sit at the far-red end of the panel.

Variables to consider when optimizing your protocol.

DNA staining protocols will need to be optimized based on the cell type, dye, and other situational factors to ensure the best results. There are a few key variables to consider when optimizing.

First, consider time. This is especially critical when staining viable cells.

Second, consider the concentration of dye. The dye-to-cell ratio is very important when doing cell cycle by flow cytometry. If there is not enough dye, the DNA will not be saturated and the peak CVs become large. The ratio between the G1 and G2 peaks may not be quite what it should be.

So, count the cells you are using and make sure that you're using the appropriate amount of dye for that number of cells.

Third, consider the temperature. For live cells, the temperature they are stained at also makes a difference. For example, in the following figure, the difference between staining with Hoechst 33342 at room temperature versus 37℃ is shown. When the cells were stained at 37℃ the peaks were much better.

One additional thing to know about Hoechst 33342 is that cells that have drug transporters will be able to kick the dye back out. So, make sure there is enough dye around to keep the DNA saturated, even while the cells are pumping some of it back out.

Fourth, consider how you will fix your cells. The best cell cycle histograms are usually obtained with ethanol or a similar precipitating fixative. Formaldehyde is really good for fixing proteins and keeping things around, but it can cross-link the DNA and the chromatin, which restricts access for DNA dyes. Also, consider whether you will use detergent as a part of your fixation protocol.

Consequences of using fixatives or detergents.

In the example shown above, one well of cells was treated identically, then split into 2 groups and fixed 2 different ways (with ethanol or formaldehyde) to demonstrate the impact of fixation on the outcome.

When these assays were put into ModFit to model them, the CVs were much lower for the ethanol-fixed cells than for the formaldehyde-fixed cells. 6% isn't awful, but it is definitely not as good as just under 5%. And you can see differences in the peak shapes.

If ethanol fixation is compatible with your assay, this is where you should start.

Next, to determine if you want to use detergent or not, consider the type of analysis you want to do. Do you want to analyze isolated nuclei, or do you want to keep the whole cell around?

If you do an ethanol fix with no detergent, the outer membrane is going to be a little bit permeabilized. It will be permeable enough to let dyes in, but not enough to let RNAs out. So, with ethanol fixation, most of the cytoplasmic contents will remain.

However, if you add detergent, it is going to dissolve the cells' outer membrane. The RNAs and other cytoplasmic confounders will dissociate, and you will be left with a more or less bare nucleus. Using detergent does, however, get rid of the need for RNase.

But the biggest problem with using detergent is that during mitosis there is a stage when the nuclear envelope is broken down so that the sister chromatids can be pulled apart and move into their new cells. If you expose cells in mitosis to detergent, the chromosomes and the DNA leak out. So, when using detergent, it's possible to underestimate the number of cells in mitosis.

Importance of titrating your dyes.

There is a lot of talk about the importance of titration with antibodies, but it's not discussed as much for DNA dyes. Titrating your DNA dyes is just as important.

When doing cell cycle with propidium iodide, it might seem okay to think, “It looks pink, it’s probably fine.” But, this does not allow you to tell if the dye is actually saturated or not.

This becomes more important when working with DAPI, because DAPI is not very water soluble. If you are using too much or too little dye, your peaks will not look as good as they could.

In the image above, you can see how much nicer the correct concentration of DAPI looks. It's a much tighter peak and the background is a lot lower. When there is too much DAPI, there is a lot of junk hanging around. But if there is not enough DAPI, like down here at 0.1 µg/mL, there is poor separation.

Again, the amount of dye that you need to use is probably going to be cell type dependent. So, do a quick titration. It doesn’t take too long, and it will save you time and frustration in the long-run.
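A quick way to set up that titration is a two-fold dilution series starting at the highest concentration you expect to need. The starting concentration and number of points below are illustrative assumptions; stain identical aliquots of the same cells at each point and compare the G0/G1 peak CV and background.

```python
def twofold_titration(top_conc_ug_per_ml, n_points=8):
    """Generate a two-fold dilution series (in µg/mL) for a dye titration."""
    return [top_conc_ug_per_ml / 2 ** i for i in range(n_points)]

# Illustrative starting point only -- adjust for your dye and cell type.
print(twofold_titration(10.0))  # [10.0, 5.0, 2.5, 1.25, 0.625, 0.3125, 0.15625, 0.078125]
```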

These are the basics of cell cycle analysis. In upcoming blog posts, we will discuss more advanced techniques and data analysis to ensure that your cell cycle experiments are consistent, reproducible, and informative.

Cell cycle analysis is deceptively easy in concept, but details are absolutely critical. Poor sample preparation, incorrect dye ratios, and too much (or too little) staining time cannot be hidden in the data. Forgetting RNase when using PI will doom your data to failure. Take these basics into account as you move into performing this simple, yet complex, assay.

To learn more about Why Cell Cycle Analysis Details Are Critical In Flow Cytometry, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.


6 Areas Of Consideration For Flow Cytometry Cell Cycle Analysis


As discussed previously, cell cycle assays require optimization of fixation and dye concentrations, but that is just the beginning. There are important considerations when performing the assay to ensure high-quality data. Cell cycle experiments are judged by the CV of the G0/G1 peak, and the best way to get a good peak is to run the experiment as slowly as possible. Likewise, since the cell cycle assay is run with linear amplification, the PMTs must be monitored and their linearity measured.

Even with those 2 aspects of the machine mastered, there are additional details (like synchronizing the cell culture) that need to be considered. Moreover, the cell cycle assay lends itself to multiplexing, allowing more information to be extracted from each sample. Those add-ons to the basic protocol need to be explored and optimized as well.

Thus, here are 6 areas of consideration for cell cycle analysis covering these important topics.

1. Run cell cycle analysis low and slow

Acquisition of cell cycle data is not like phenotyping. First, data is acquired with linear amplification, rather than logarithmic amplification. Unlike the expression of surface markers (where there can be a wide range of expression), with cell cycle data, the range of expression is much more restricted. If the G0/G1 peak is set at 10,000, the G2/M peak would be around 20,000. Second, the quality of the data is assessed by the spread of the G0/G1 peak, so running the samples at a low flow rate is critical. The consequences of high flow rates are shown in Figure 1.

Figure 1: Impact of flow speed on data quality. The blue peaks show the results of data acquired at low flow rate. The red peaks show data acquired at high flow rate. At high flow rate, the peaks broaden and shift, both of which compromise the quality of the data.

2. Check PMT linearity

Another important component of acquiring cell cycle data is that the detectors, the PMTs, need to be linear. One way to check this is to use a standard, like Chicken Erythrocyte Nuclei (CEN), as shown in Figure 2. After setting the voltage, gate on each of the populations and take the mode value (the most common value in the gate). The ratios of peak 2 and peak 3 to peak 1 should be 2 and 3, respectively. The closer these ratios are to those values, the better the quality of the data.

Figure 2: CEN stained with 50 µg/mL PI staining buffer containing RNase.
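That linearity check is easy to automate once the peak gates are drawn. The sketch below assumes the DNA signal is a 1-D NumPy array of linear-channel values and that the gate boundaries are supplied by the user; each peak's mode is approximated by the most populated histogram bin.

```python
import numpy as np

def peak_ratios(dna_signal, peak_gates, bins=256):
    """Check detector linearity with a multi-peak standard such as CEN.

    dna_signal: 1-D NumPy array of linear DNA-channel values
    peak_gates: list of (low, high) boundaries around peaks 1..n
    Returns each peak's ratio to peak 1; a linear detector gives values
    very close to 1, 2, 3, ...
    """
    modes = []
    for low, high in peak_gates:
        gated = dna_signal[(dna_signal >= low) & (dna_signal < high)]
        counts, edges = np.histogram(gated, bins=bins)
        modes.append(edges[np.argmax(counts)])  # left edge of the modal bin
    return [m / modes[0] for m in modes]

# Hypothetical gates around the three CEN peaks:
# print(peak_ratios(cen_events, [(8_000, 12_000), (18_000, 22_000), (28_000, 32_000)]))
```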

In an ideal world, the cell cycle data would look like the theoretical plot shown in Figure 3, left plot. In this case, the G0/G1 is a peak in a single channel, and the G2/M is a peak in a single channel with twice the fluorescence intensity of the G0/G1. The line connecting the 2 peaks represents the S-phase. The red line in the left plot represents a common control used in cell cycle analysis — in this case, trout nuclei.

The reality of the data is shown on the right, and the spread of the data comes from the sample preparation and instrument.

Figure 3: Ideal cell cycle data (left) and the reality (right).

3. Controls are critical

One very useful control to add to cell cycle analysis is a DNA standard control. The most common controls are either chicken or trout nuclei (CEN and TEN). Using one or both of these controls can help consistently establish proper voltages, as well as help sort out how different populations may be changing. In this example from Darzynkiewicz et al., (2017) (Figure 4), the breast cancer cells were stained and CEN and TEN were used to help establish the position of the diploid (D) and aneuploid (A) cells.

Figure 4: Use of CEN and TEN as a control for cell cycle analysis. From Darzynkiewicz et al., (2017).

These controls are especially important when studying aneuploid cells, and for longitudinal studies. CEN, for example, have a DNA content that is 35% that of human DNA, while TEN have 80% of human DNA, which makes them ideal controls (Vindelov et al. 1983). By staining these 2 controls, establishing voltages and calculating ploidy is a breeze.
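The ploidy calculation itself is a simple ratio, usually reported as the DNA index. The minimal sketch below shows that arithmetic; the CEN/TEN scale-check values come from the percentages quoted above, and the example peak means are hypothetical.

```python
def dna_index(sample_g1_mean, diploid_reference_g1_mean):
    """DNA index (DI) = sample G0/G1 mean / diploid reference G0/G1 mean.

    A DI of 1.0 indicates normal diploid DNA content; values above or
    below 1.0 suggest aneuploidy.
    """
    return sample_g1_mean / diploid_reference_g1_mean

# Quick scale check using the reference nuclei (values from the text above):
# TEN (~0.80x human) divided by CEN (~0.35x human) should sit near 2.3.
print(0.80 / 0.35)                 # ~2.29
print(dna_index(61_000, 50_000))   # hypothetical aneuploid sample, DI ~1.22
```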

4. The sub-G1 peak is not just apoptotic cells

As cells die and undergo apoptosis, DNA fragmentation occurs. This can manifest itself in the cell cycle histogram as a population to the left of the G1 peak. Compare these 2 figures from Darzynkiewicz et al., (2010):

Figure 5: Appearance of sub-G1 peaks on cell cycle profiles. From Figure 2 of Darzynkiewicz et al., (2010).

The inclination of some investigators is to put a region around this area and call that the percentage of apoptotic cells. However, what is really in that population? If we use plates as an analogy for cells, the issues with sub-G1 analysis become clear. In Figure 6, we have a hypothetical DNA histogram with a G1 peak and a sub-G1 peak. The plates represent whole cells and are in the G1 phase. The sub-G1 peak represents cells undergoing apoptosis, or being “broken”. Based on the picture in Figure 6, how many plates are broken?

Figure 6: Why the sub-G1 peak can’t be used to calculate apoptosis. Thanks to Mark Munson for this explanation.

Since the composition of the Sub-G1 peak is a mixture of many cell fragments, it is not a useful marker for determining the percentage of apoptotic cells. It can give you a suggestion that apoptosis is occurring, but use a more robust method like the Annexin-V assay to measure apoptosis.

5. Synchronizing cells when performing drug studies

When performing a drug study to explore the effects of your drug of choice on cell cycle progression, synchronizing the culture is critical. Since individual cells progress through the cell cycle at different rates, it will be very difficult to determine the impact of the drug if the culture is not synchronized.

There are several popular techniques for synchronizing cells at a specific stage of the cell cycle, which are reviewed by Jackman and O’Connor and are summarized in Figure 7.

Figure 7: Methods for synchronizing cells at specific stages of the cell cycle. From Figure 8.3.1 in Jackman and O’Connor.

Serum starvation is a popular technique for synchronizing a culture. By removing serum from an actively growing culture, within 24 to 48 hours, you can get the cells to stop at the G0/G1 phase. When serum is added, the cells will resume the cell cycle.

Nocodazole disrupts microtubules, preventing cells from undergoing mitosis. It is easily reversible, and allows for isolation of cells in M phase in about 12 to 16 hours.

6. Consider multiplexing for more information

If you are going to the trouble of performing a cell cycle analysis, why not expand the assay a bit and get that much more information about the cells?

  1. BrDU for S-phase

    BrDU and the related EdU are thymidine analogs that can be used to better identify the S-phase. The assay is relatively straightforward: by pulsing the culture for some period of time with BrDU, it will get incorporated into the actively synthesizing cells. The BrDU is measured with an antibody, while EdU uses “Click-iT” technology to reveal the labeled cells. The resulting data is shown in Figure 8.

    Figure 8: Using BrDU to reveal the S-phase of the cell cycle. Figure from Tonbo Literature

    The BrDU pulls out the S-phase cells, making it easier to identify this phase of the cell cycle.

  2. Ki-67 for proliferation

    Antibodies against Ki-67 are useful, as they are a marker of proliferation and can be used to differentiate quiescent cells from cycling cells.

    Figure 9: Separating quiescent cells from cycling cells. From Kim and Sederstrom (2015).

  3. Phospho-H3 for M phase determination

    Histone H3 is phosphorylated during mitosis at Ser-10 and Ser-28, and thus serves as an excellent tool to separate the G2 from the M phase, as shown in Figure 10.

    Figure 10: Use of pH3 to separate M phase from G2 phase. Figure from ThermoFisher website

    Consider the possibilities. Using these tools in combination with traditional cell cycle staining can really dissect this important biological process. The caveat when moving into multiplexing cell cycle measurements is that the fixation conditions need to be optimized to ensure good staining of each component. For example, fixation with formaldehyde may be necessary to ensure good antibody staining, but this might need to be followed up with alcohol fixation to improve the cell cycle staining. Optimization is the key here.

Cell cycle seems like such a straightforward assay. At its heart, it is a one-color assay and should be a simple protocol to follow. However, as discussed before, fixation and dye concentrations are critical. Once those are optimized, it becomes important to run the cells low and slow to get the best quality histograms for analysis — the topic of another blog. Adding the critical CEN and TEN controls will help standardize the assay and ensure consistency and reproducibility between runs, while helping distinguish non-standard (aneuploid, polyploid) populations from those with normal ploidy. Isolating and focusing on specific components of the cell cycle can be done by adding specific antibodies or by using thymidine analogs. In the end, cell cycle analysis is a simple assay that has a great deal of potential. With work and optimization, a great deal of information about the life of a cell can be extracted.

To learn more about the 6 Areas Of Consideration For Flow Cytometry Cell Cycle Analysis, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.



Best Flow Cytometry Cell Sorting Practices


Walk into any flow cytometry facility and you will see one or more cell sorters. These devices use the principles of flow cytometry to isolate phenotypically defined cells to a high degree of purity for any of a number of downstream applications. Even single cells can be isolated for cloning and single-cell genomics analysis, a very hot area of research these days.

This was not always the case. Prior to 1965, if a researcher wanted to isolate cells, their only choice was some form of gradient centrifugation, a bulk separation method. There were no real options for anything with finer control.

That changed when Mack Fulwyler published this paper in which he described an instrument capable of measuring an object’s Coulter volume and isolating the cells based on this volume. The ingenious part of the system was the use of the technology that Richard Sweet had developed for the “ink jet oscillograph”. This first cell sorter is shown in Figure 1.

Figure 1: One of the first cell sorters built by Mack Fulwyler.

Fulwyler demonstrated, using a mixture of mouse and human erythrocytes, that these 2 populations could be separated, as shown in Figure 2 from that paper.

Figure 2: Results from Fulwyler’s paper, showing the separation of a mixture of cells.

This was revolutionary and led to the technology available today, where researchers can isolate very specific populations of cells, to have homogeneity from a heterogeneous population, and then do downstream applications with those cells.

Enter Len Herzenberg, who coined the term FACS — “fluorescence-activated cell sorting” — and was one of the first biologists to see the power in this new invention. He took Fulwyler's plans and, with the assistance of the engineers at Stanford's Instrumentation Research Laboratory, added an arc lamp (1969) and later an argon laser (1972). The race was on as Becton Dickinson took these ideas and began marketing the first commercial cell sorters.

Cell sorting is a central tool to many experiments, so it is only right to look at some strategies that can be used to get high-quality results from a sorting experiment. The focus of these strategies is on electrostatic cell sorters.

1. Find your ideal sample concentration.

The correct input sample concentration is critical to ensure that your cells are delivered in a consistent concentration to the sorter. Too many cells per second, and the purity and yield are compromised. Too few cells per second, and the sort can take a long time, potentially impacting the quality of the sorted cells.

The proper concentration is a combination of the cell type you are using and the nozzle size of the cytometer. The goal is to have a nozzle that is about 5 times the size of the cells being sorted. The larger the cell, the larger the nozzle, which in turn impacts the pressure of the sheath fluid as well as the frequency of droplet generation. Below is a rough guide based on some common cell sizes, using data from this site.

Figure 3: Approximate cell diameters for different cell types and recommended nozzle size.

The next step is to understand that the cell sorter actually sorts droplets, not cells. These droplets are generated by a piezoelectric element that vibrates the stream at some frequency; the smaller the nozzle, the higher the frequency. Based on Poisson statistics, we would like to have an event rate of approximately 1 cell per 4 droplets. This event rate gives us the greatest probability that there is not a cell in the leading or lagging droplet, while maintaining a reasonable sorting rate. Shown in Figure 4 are some recommended values for sorting rate and cell concentration. Of course, check with the sort team to ensure you know the values for your specific instrument.

Figure 4: Recommendations for event rate and cell concentration, based on nozzle size and droplet generation frequency.

If the sample is too concentrated, you're going to increase the abort rate; aborts occur when a second cell enters the analysis window before the first cell is finished processing, and the data for both cells is lost. Further, if you have too many events per second, the Poisson statistics become skewed, increasing the probability of 2 cells in one droplet, or of cells in the leading or lagging droplet. The result compromises your ability to sort with good purity and recovery.
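The Poisson arithmetic behind the 1-cell-per-4-droplets guideline is straightforward to sketch. In the example below, the 90 kHz droplet rate for a 70 µm nozzle is an assumed, typical value; the function estimates the chance that both the leading and lagging droplets are empty, which is the situation a purity sort wants.

```python
import math

def neighbors_empty_probability(event_rate_hz, droplet_rate_hz):
    """Probability that both the leading and lagging droplets are empty,
    assuming cell arrivals follow Poisson statistics (a simplification of
    what real sorter electronics actually track)."""
    cells_per_droplet = event_rate_hz / droplet_rate_hz
    return math.exp(-2.0 * cells_per_droplet)

# Assumed 70 µm nozzle running near 90 kHz; 1 cell per 4 droplets -> ~22,500 events/s
droplet_rate = 90_000
event_rate = droplet_rate / 4
print(f"{neighbors_empty_probability(event_rate, droplet_rate):.2f}")      # ~0.61
print(f"{neighbors_empty_probability(2 * event_rate, droplet_rate):.2f}")  # ~0.37 if the rate is doubled
```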

Of course, this sort of data can also be used to estimate the time it will take to sort the sample. This is shown in Figure 5.

Figure 5: Time to sort 100,000 cells, based on population frequency and droplet frequency.

This data is from one facility, so ask your local facility if they can provide this information for you. It will help you plan your experiment in more detail and help determine how long a sort will take.
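If your facility cannot share a table like Figure 5, a back-of-the-envelope estimate is easy to make yourself. The event rate and sort efficiency below are illustrative assumptions, not values taken from the figure.

```python
def sort_time_hours(cells_needed, event_rate_hz, target_frequency, efficiency=0.8):
    """Rough sort-time estimate.

    Total events to run = cells needed / (target frequency * efficiency),
    where efficiency folds in aborts and losses (the 0.8 default is an
    assumption). Divide by the event rate to get seconds, then hours.
    """
    total_events = cells_needed / (target_frequency * efficiency)
    return total_events / event_rate_hz / 3600.0

# 100,000 target cells at 0.1% frequency, run at an assumed 20,000 events/s:
print(f"{sort_time_hours(100_000, 20_000, 0.001):.1f} h")  # ~1.7 h
```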

2. Use magnetic sorting to enrich a rare population.

What if, based on the calculations above, your sort is going to take too long? This usually happens with rare event sorts. For example, as Figure 5 shows, if 100,000 cells of a 0.1% frequency are needed, the sort could take from 2.5 to 11 hours, depending on cell size. If more cells are needed, it will take even longer. How can one reduce the sort time?

Enter magnetic cell sorting. With this technique, you label your cells with antibodies coupled to a magnetic bead and expose the solution to a magnetic field. Cells carrying the magnetic bead are retained while the other cells remain in the supernatant. Magnetic separation had been used for years, but in this paper the MACS method was described. The MACS system uses a paramagnetic bead and a column for the separation. Other methods do not employ the same bead and can be done in solution. The choice is up to the investigator.

Magnetic separation is a great pre-enrichment step for cell sorting, and sometimes it may even be sufficient for what you need. For example, if you need CD4 cells, magnetic sorting may be all you need to do with your blood to isolate your CD4 cells.

But in general, it’s used for pre-enriching the sample.

Here’s an example: there are 100 million bone marrow cells, the population of interest is 0.1%, the sort speed is at 15 million cells an hour, so it’s going to take you 2 hours to sort that population.

However, if you perform some sort of pre-enrichment, such as using magnetic beads to enrich the population, it's possible to go from that larger population of 100 million cells down to an enriched population of about 10 million cells. Enriching the population also means that you will be sorting the population at a higher frequency than 0.1%.

Sorting 10 million cells on a machine that can sort 50 million cells an hour means it takes about 20 minutes, instead of 2 hours. The time saved can get even more significant depending on the frequency and how many cells you are starting with. The end result is that the cells are happier, as is the researcher.
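A minimal sketch of that time saving is below, using the same illustrative numbers; real-world times will run longer because this ignores setup, aborts, and sample changes, and actual throughput depends on the nozzle and cell size.

```python
def hours_to_sort(total_cells, cells_per_hour):
    """Bulk throughput estimate: total events divided by sorter throughput
    (ignores setup time, aborts and sample changes)."""
    return total_cells / cells_per_hour

# Illustrative numbers from the example above:
total_cells, enrichment_factor, throughput = 100_000_000, 10, 50_000_000
without = hours_to_sort(total_cells, throughput)
with_enrichment = hours_to_sort(total_cells / enrichment_factor, throughput)
print(f"{without:.1f} h without enrichment, {with_enrichment * 60:.0f} min after enrichment")
```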

3. Suspend cells in the right buffer to avoid cell clumps.

Flow cytometry is a single-cell technique, so having a good single-cell suspension is critical for good results when sorting. Additionally, since the nozzle is smaller than it is on a typical analyzer, clumps will lead to clogs, which slow down the sort and cause unnecessary grief.

It is critical to examine your cells under a microscope before sorting to see how good the single-cell suspension is. Also, filter the sample just before sorting to remove any clumps in the solution. Beyond that, remove aggregates from the sort gate with a doublet discrimination gate. You can learn more about doublet discrimination in this blog.

The buffer can make all the difference in the quality of the single-cell suspension. If, for example, you are working with non-adherent cells, a buffer of PBS with 0.5% BSA is a good basic buffer. Adding 25 mM HEPES buffer (pH 7.0) is a good idea as well, as HEPES has better buffering properties at high pressure than PBS does. Finally, a smidge (1 mM) of EDTA is good to add, as it helps chelate divalent cations that are often required for the formation of cell aggregates.

If you are working with lymphocytes, you can often omit the EDTA. If you have a high percentage of dead cells, adding 10 units/mL of DNase I is strongly recommended. This will help reduce clumping caused by free DNA.

Finally, adherent cells require additional efforts. This can include increasing the EDTA to 5 mM, but be careful as too much EDTA can be bad for cells. Likewise, if you are adding DNAse, it requires Mg++ to work and EDTA will reduce/inactivate the DNAse. Consider neutralizing trypsin used to remove the cells with BSA or trypsin inhibitors, rather than FBS.

On the other end of the system, the catch buffer needs to be considered. Your collection buffer needs to be designed for your cells to enhance viability.

It is not advisable to use 100% FBS as a collection buffer, for 2 reasons. First, it is denser than the sorted droplets, so when the cells hit it they do not actually mix in. Second, the few cells that do get into the FBS early on are exposed to 100% FBS, so they receive huge doses of whatever else is in the serum, and this will affect the cells.

The most recommended collection buffer is your cell culture medium with 10% FBS or some other serum.

Don’t forget antibiotics, pen-strep or gentamycin, and antifungal agents — you don’t want to introduce any contamination.

4. Change your instrument settings when sorting small cells.

Small cells, such as bacteria and particles of similar size, test the limits of resolution of most cell sorting systems. Most systems have been designed for looking at mammalian cells, so a 0.3-micron bacterium is much harder to find in the background noise.

Beads can help to calibrate your instrument for small-particle detection, as can a forward scatter PMT. Side scatter may be a better option here, especially if it is possible to use a shorter wavelength (405 nm, for example), as this may also improve resolution.

Don’t forget to change your data collection to log scale. Better still, it’s best not to rely upon scatter at all. Fluorescence is a much better trigger and there are a variety of fluorescent dyes that will stain live bacteria which can help identify your bacterial population of interest.

You can also back gate on the fluorescence to see where the populations are in your other plots. You aren’t going to get the resolution discrimination that you would have on, say, a population of lymphocytes from PBMCs.

Finally, with small cells, core stream size is important, as is the event rate. Keep to the recommended sorting rates as described above.

5. Optimize your sample preparation and instrument when sorting large cells.

What about sorting big cells? Slow it down.

Based on the data above, you are going to have to use less concentrated samples with the larger nozzle.

Interestingly, when using the 130 micron nozzle, ear protection may be required, as the drop drive frequency can enter the range of human hearing, and it is annoying at a minimum.

DNase is also a must, especially with larger cells, because they are more fragile and easier to break. Easy to break means more DNA coming out, and more free DNA means more things getting stuck together.

The collection buffers for large cells also have to be heavily optimized to preserve the larger cells.

Now, what about really, really, really large cells? Such as this C. elegans that has been transfected with GFP?

Figure 6: C. elegans transfected with GFP tag.

You can see spots of GFP. If you wanted to sort this C. elegans from a C. elegans that didn’t have spots, what could you do?

This requires a special instrument, which is shown below. Made by Union Biometrica, the BioSorter Platform is perfect for very large cells and small organisms.

This system has a range from 10 to 1,500 microns, so this system will sort neurospheres, adipocytes, lipid bodies, and other things that are large fragile cells. It will sort C. elegans, and it will sort fly larvae.

Cell sorting by flow cytometry is a powerful method first developed by Mack Fulwyler. Today's researchers have access to machines that can sort up to 6 populations simultaneously, with many fluorochromes used to finely subset the cell populations. It is a central technology that serves as a gateway to additional techniques, from cell culture to generating xenograft models, to genomic analysis.

As a researcher, you want to achieve the best cell sorting possible. So, how can you achieve that? There are clear strategies you can use to achieve great cell sorting results, including finding your ideal sample concentration, using magnetic sorting to enrich your population, suspending cells in the right buffer to avoid cell clumps, changing your instrument settings when sorting small cells, and optimizing your sample preparation and instrument when sorting large cells. Happy sorting.

To learn more about Best Flow Cytometry Cell Sorting Practices, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.


From Purity To Biosafety, Understanding The Cell Sorting Process


The Power Of Flow Cytometry In Cell Sorting

Often, cell sorting is described as trying to find a needle in a haystack. In 2008, David Brown wrote an article on cell sorting for the Washington Post. In that article, he wrote:

“Finding and sorting a few cells from a great mass of them is sometimes likened to finding a needle in a haystack. But it's actually more like watching a parade of 10,000 undertakers, and spotting the two who are wearing yellow ties, and the three who are wearing pink ties, and getting them out of the line, without making everyone else lose step.”

This really captures the essence of the sorting process. It is also a very scary process for the average researcher. After spending hours and hours preparing samples, they are often handed off to an operator who puts them into the cell sorter. With fingers crossed, the researcher hopes the cells are sorted correctly, and there are enough of them to perform the downstream experiments that are the real goal of the process.

Most of the time, things go right, but now and then, they fail, sometimes catastrophically. Cell sorting is a balancing act of optimizing the sample and system to get the best yield with the highest purity in the shortest time possible. So we turn our attention to how to win that balancing act.

A Word About Biosafety

Before continuing on to the meat of this discussion, it is important to take a moment to discuss biosafety. Cell sorters, by the nature of droplet generation, produce aerosols, and these are of a size that can settle in the deep lung if inhaled. This is important because in 1990 there was a report of a laboratory-acquired infection due to aerosolization during centrifugation. While aerosolization of HIV in animal studies has not resulted in infection, this article by Jones and Brosseau from 2015 discusses the issue in more detail.

The International Society for the Advancement of Cytometry, which in many ways can be considered the industry standard, published their first comment on biosafety in 1997, reviewed it in ’99, and most recently revised it in 2014. Central to this biosafety of cell sorting is an assessment of the dangers. A good place to start, if you have not already, is to work with your institutional biosafety officer. Safety involves multiple levels of controls from engineering controls to personal protective equipment. Testing and validation is also a critical step. Your biosafety officer can help you with this process. Don’t shirk in this area.

The Sorting Balancing Act

Cell sorting is a combination of a numbers game (Recovery), quality of output (Purity) and speed. For any experiment, the end goal is going to be measured by these three characteristics, and as soon as one of these measures is more heavily favored, the other two must be compromised in some manner.

When designing a sorting experiment, start with the questions of what the cells will be used for after sorting and how many cells will be needed for those experiments. That sets the minimum recovery that is needed. The second question is how pure the cells need to be; the requirements of the downstream assay will dictate the purity needed.

The cell type being used will, in part, dictate the speed of sorting. Smaller cells can be sorted faster because a smaller nozzle can be used.

When you start a cell sort it’s important that you are aware of the downstream analysis and assays that you want to run. This will determine how you perform the sort and how you determine if your sort was successful or not.

Successful cell sorting involves balancing speed, recovery, and purity. What do these three terms mean, and what influences each of these factors?

  1. Speed is how fast your cells can be sorted. This is influenced by the size of the cell: the larger the cell, the slower the rate at which cells can be sorted. This is because larger cells require larger nozzles. Larger nozzles, in turn, require lower sheath pressures to run, and lower sheath pressure limits the rate at which droplets can be generated. This finally leads to Poisson statistics, a way to describe the arrival of a given number of events per unit time, with the caveat that each event arrives independently of the previous event. The relationship between nozzle size and droplet frequency is shown in Figure 1.
  Figure 1: Relationship between frequency and droplet generation. The larger the nozzle, the lower the frequency, which in turn impacts how many events per second one should sort at. The dashed lines represent the upper and lower limits for a given nozzle, based on the data in Arnold and Lannigan (2010) Curr. Protoc. Cytom. 51:1.24.1-1.24.30.

  2. Recovery is how many target cells are recovered. This is generally defined as the number of particles of interest in the sorted sample divided by the number of particles of interest sorted according to the sort report. Several factors can influence recovery. How well the sort streams are aimed into the collection tubes is one thing to consider, as is how the collection tubes are treated. Since the droplets containing the cells are charged, it is important to neutralize the charge of the catch tube. This can be done by coating the tubes with protein; the easiest way is to add your protein-containing staining buffer to the catch tubes and let them roll around for about 30 minutes before use. Additionally, the sorting 'envelope' that is used can impact the recovery, as will an inaccurate drop delay.
  3. Purity is a measure of how many of the sorted cells are target cells. This is defined as the number of target cells in the sorted sample divided by the total number of particles in the sorted sample. The goal of the sorting experiment is to get as many target cells as possible with as few contaminating cells as possible. These contaminating cells can come from a poorly set drop delay, poorly resolved doublets, dead cells, and poorly defined gating strategies. When sorting, viability dyes are a must, and dump channels are strongly encouraged. Also, simple sample preparation tips to reduce clumping should be employed: adding a small amount of EDTA to the staining buffer, adding 10 units per mL of DNase I to reduce clumping caused by free DNA, and filtering just before the sort. These three factors also influence the yield of the sort. The yield is the number of target cells in the sorted sample divided by the total number of target cells in the original population; the second number is usually calculated as the frequency of the target population, as defined by the gating strategy, times the number of cells you started with. A quick way to compute these metrics from a post-sort check is sketched below.
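Here is a minimal sketch of those definitions as code; the cell counts are hypothetical and would come from re-running an aliquot of the sorted fraction plus the sort report.

```python
def sort_metrics(target_in_sorted, total_in_sorted, target_reported_sorted,
                 target_in_original):
    """Post-sort QC metrics as defined above.

    purity   = target cells in the sorted tube / all particles in the tube
    recovery = target cells counted in the tube / target cells the sort
               report says were sorted
    yield    = target cells in the tube / target cells in the starting sample
    """
    return {
        "purity": target_in_sorted / total_in_sorted,
        "recovery": target_in_sorted / target_reported_sorted,
        "yield": target_in_sorted / target_in_original,
    }

# Hypothetical re-analysis of a sorted fraction:
print(sort_metrics(target_in_sorted=95_000, total_in_sorted=100_000,
                   target_reported_sorted=110_000, target_in_original=500_000))
# {'purity': 0.95, 'recovery': 0.86..., 'yield': 0.19}
```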

What Is The Sort Envelope?

When the sort operator is setting up the sort, they will often ask what sort mode to use. The sort mode will impact the sort window, and thus the purity and yield of the sort. Following the recommendations derived from Poisson statistics, the goal is to have 1 cell per every 4 droplets. After the cell is interrogated, the system uses a value called the drop delay to determine which drop to charge. The predicted location of the cell within the droplet will also impact how big the drop envelope needs to be. This is shown in Figure 2.

Figure 2: Impact of predicted positive event and sort window size.

On the left, a one-drop window is used. This requires the positive event to be predicted to fall in the center of the drop. If the predicted location of the positive event is not in the center, then the drop will not be sorted. As you can imagine, this can reduce the recovery of the target cells by as much as 75%. Typically, one-drop windows are used when high purity is required, such as for single-cell genomics.

On the right, a two-drop window is used. In this case, the predicted location of the positive event is off-center, so the neighboring drop is sorted along with the interrogated drop to ensure that the target cell is captured. This is good for high-purity sorts with good yields. In these cases, if a contaminating cell is predicted to be close to the target cell, neither drop is sorted, to preserve the purity of the sorted population.

There is a third sorting option, which focuses on maximizing recovery. In this case, contaminating cells are not considered, and if there is a target cell in a drop, it is sorted. This maximizes the recovery of the target cells, but decreases purity.

Thus, knowing how pure your sorted population needs to be can help determine the specifics of the sort window to be used.

Putting It All Together

When designing a sorting experiment, there are several considerations. One of the first important things is to meet with the sort team and discuss the cells that will be sorted. This way, the proper biosafety procedures are put in place.

Moving on, the downstream application that the cells will be used for can help determine how many positive cells are needed. These calculations are discussed here, and shown in Figure 5 of that article. That table can also help determine how many cells to start with. Of course, if the time to sort is going to be too long, one should consider an enrichment step before going to the sorter. In parallel with the number of cells needed is the purity of the population, which leads to making a decision on the sort window, as discussed above.

Looking at the balancing act of speed, purity, and recovery in cell sorting is the third area to consider. The speed of sorting is dictated by the size of the cells, which impacts the size of the nozzle and therefore the pressure of the sheath fluid. The pressure impacts droplet generation, and following Poisson statistics, sorting should be conducted at an event rate that is about 1/4 of the droplet generation rate.

Decisions as to the sorting mode help dictate the purity and recovery of the target cells. If recovery is the main goal, a sort mode that captures all the target cells, regardless of the presence of contaminating cells, will give the most recovery, but sacrifices purity. If purity is more important, recovery must be sacrificed to ensure that purity is not compromised. In the extreme case where the downstream application demands the highest-purity cells, a stricter sort mode should be employed that sacrifices all but the most perfectly aligned target cells, reducing recovery even more.

In the end, consideration of all good sorting experiments starts with the needed outcome – how many cells are needed for the downstream application. From that determination come all the next considerations and calculations. Taken together, the number and purity of the target cells is balanced by speed of the sort. Since the sort speed is often fixed based on cell size, the consideration of the sort window becomes important in helping to define recovery and purity.

To learn more about From Purity To Biosafety, Understanding The Cell Sorting Process, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.


Using Begley’s Rules To Improve Reproducibility In Flow Cytometry


Written By: Tim Bushnell, Ph.D.

Isaac Newton was famous for saying “If I have seen further than others, it is by standing upon the shoulders of giants.” Implicit in that statement is that the information that the giants provided was reproducible. In fact, reproducibility is central to the scientific method and as far back as the 10th century, the concept of reproducibility of data was being discussed by Ibn al-Haytham.

In 2011, Prinz et al. published an article indicating that a Bayer Healthcare case study on reproducibility found only 25% of academic studies to be reproducible. This was followed in 2012 by a report from Begley and Ellis indicating that only 11% of 53 landmark oncology studies could be replicated. So it seems that while we are trying to see farther, our lens may be out of focus.

Bruce Booth, writing for Forbes, published an article called “Scientific Reproducibility: Begley's Six Rules” in which he proposed the following 6 rules, which should serve as a roadmap for evaluating scientific work, both published work and your own. These rules are:

  1. Were the studies blinded?
  2. Were all the results shown?
  3. Were the experiments repeated?
  4. Were the positive and negative controls shown?
  5. Were the reagents validated?
  6. Were the statistical tests appropriate?

While these rules are focused more on clinical trials, they are readily adapted to basic scientific inquiry. Starting to think about these questions in the early stages of discovery and on into pre-clinical studies should increase the confidence and reproducibility of the later stages of the process.

Using Begley’s Rules

Reproducibility is a mindset; it is not one simple tweak that makes the data reproducible. It is a matter of critically evaluating each process in the experiment and identifying areas that can be improved. It involves complete communication of the process. It involves relying on well-developed and documented standard operating procedures that everyone involved in the project is trained on.

Turning our attention back to Begley’s rules, how can these rules help you improve your research? They help provide a roadmap on how to design, validate, execute and report experimental data in a way that is more robust and reproducible.

1. Take the first rule: “Were the studies blinded?”

This is a critical component of clinical trials. In blinded studies, the subject does not know if they are part of the control group or the experimental group. In a double-blinded study, the experimenter also does not know which group the subjects are part of.

This helps prevent experimenter bias impacting the data. In the research setting, this technique is not often used, but with a little coordination within the laboratory, this could be implemented in the research setting.

2. Thinking about the second rule: “Were all the results shown?”

Flow cytometry is a data-rich technology, and numbers are the name of the game. Experiments typically look at the change in the percentage of a population or the change in the expression pattern of a given protein.

For this reason, the results of any experiment can often be summarized and presented as a table or graph that provides statistical information about the experiment, which is used to support (or refute) the thesis of the argument. A histogram or bivariate gating strategy is useful, but the meat of the argument will be in these summary figures, such as the one shown below.

Figure 1: Summary figure showing all the results of an experiment measuring the changes in CD4+ cells after drug treatment. All the data is shown with the mean and standard deviation indicated. The number of data points and the p-value between the two datasets are indicated.

In addition to showing the data, thanks to the support of the Wallace H. Coulter Foundation and ISAC, there is a public database where flow cytometry data can be deposited. The Flow Repository allows researchers to upload their data for their published experiments. This allows for all researchers to review the data that the paper is based on, thus improving the ability of researchers to repeat and extend findings of interest.

3. In line with showing all the data is the third rule: “Were the experiments repeated?”

For any experiment, it is critical that the measurements are replicated. This becomes the 'n' in any graph and helps convey how robustly the hypothesis has been tested. From discovery-based work, the magnitude of the difference and the expected variance in the data can be estimated. This, in turn, allows for a power calculation, which can help guide the researcher in determining the 'n'.

The smaller the difference that the researchers wish to test, the more samples that they will need to run. The program Statmate is one useful tool for performing these calculations. Figure 2 shows how to determine the number of samples to run based on the Statmate output.

Figure 2: Statmate output used to determine the number of replicates needed.
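If Statmate is not available, the same kind of sample-size estimate can be sketched in a few lines of Python. This is only an illustration using the statsmodels power module; the effect size, alpha, and power values below are placeholders, not numbers taken from the figure.

```python
# Sample-size estimate for a two-sample t-test (a sketch; the inputs are placeholders).
from statsmodels.stats.power import TTestIndPower

effect_size = 1.0   # assumed (mean difference / standard deviation) from pilot or discovery data
alpha = 0.05        # significance threshold chosen before the experiment
power = 0.8         # desired probability of detecting a difference of that size

n_per_group = TTestIndPower().solve_power(effect_size=effect_size, alpha=alpha, power=power)
print(f"Samples needed per group: {n_per_group:.1f}")   # ~17 per group for these inputs
```

The smaller the effect size you want to detect, the larger the number returned, which is exactly the trade-off described above.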

4. This leads to the fourth rule: “Were the positive and negative controls shown?”

In flow cytometry the controls that are used to determine the population of interest are very important to show. Since gating is a data reduction technique, incorrect gating can impact the data and conclusions.

Without showing and explaining the use of the controls, gating is more a subjective art than an objective evaluation. For those starting out in flow cytometry, using the data available in the Flow Repository along with the paper it came from is a good way to practice. The OMIPs are especially useful for this purpose.

5. Next is the fifth rule: “Were the reagents validated?”

When thinking about flow cytometry, reagent validation is a critical step in the validation and optimization of any polychromatic panel. This is especially true of the antibodies used in experiments.

In Bradbury and Plückthun’s commentary in Nature, the authors estimate about 50% of the money spent on antibodies is wasted due to the quality of the antibodies. Issues with antibodies can include cross-reactivity, lot-to-lot variability and even the wrong antibody for the application.

With the advent of recombinant antibodies, this should become less and less of an issue, but it will take time for these reagents to penetrate the market. At a minimum, every antibody that comes into the lab should be tested and titrated to ensure the reagent is working properly.

Beyond the antibodies, any other reagent that is being used should be tested and validated. This includes the flow cytometer. While not a reagent, per se, it is essential to gathering the data and the results of the quality control being performed on the system should be accessible to the investigator. In fact, the investigator can build into their procedures their own QC steps that show the instrument and assay are working.

6. The last of Begley’s rules is “Were the statistical tests appropriate?”

All researchers want their data to be shown to be statistically significant because there is an inherent bias in published articles. It was shown by Dickersin et al., in their 1987 paper that papers containing data shown to be statistically significant were 3 times more likely to be published. Issues with HARKing and p-Hacking are troubling but can be reduced or avoided with a simple change to the mindset in experimental design.

Before any experiments are performed, it is critical to consider the outcome and how one would validate the hypothesis being tested. Doing this at the beginning of the process, rather than towards the end, allows the researcher to define the statistical testing that will be used and the threshold for significance.

Any deviations from this plan need to be reported so that readers can understand and evaluate the statistical analysis. By defining the power of the experiment in advance, you also reduce the temptation to stop collecting data early when the results support your hypothesis. While outliers can be very interesting in their own right, define a rule for excluding them from the analysis and report it.

In summary, Begley’s rules are a useful tool for evaluating the quality and reproducibility of data. They help you look for important issues both in a published report and in your own experiments. Couple these with the best practices in flow cytometry, and you are well on your way to improving the rigor and reproducibility of your work.

To learn more about Using Begley’s Rules To Improve Reproducibility In Flow Cytometry, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.

Flow Cytometry Mastery Class wait list | Expert Cytometry | Flow Cytometry Training

3 Ways To Improve Flow Cytometry Troubleshooting

Written by: Tim Bushnell, Ph.D.

I spend a lot of time working with my colleagues and clients retrospectively troubleshooting their data. This involves trying to understand and explain what might’ve happened during the data acquisition that led to the results in question.

During these discussions, I give them tips and strategies to improve their acquisition and data quality over time, because we want to make sure that people have the highest quality data they can. This means that researchers should understand and follow best practices and take the time to make sure that their data is collected correctly.

There are three main areas that you need to think about when troubleshooting your experiments…

1. What do you do before you start collecting data?

When you sit down at the cytometer, what are the steps that you go through to make sure the cytometer is ready?

Do you check the quality control logs?

Do you ask the operators for the quality control data to see how the machine has been behaving since the last time you ran the system?

Or, better yet, do you have some sort of quality control built into your assay?

This could include a bead, such as the Spherotech peak 6 bead, which is a very useful tool. Any standard bead you choose is useful for QC, as long as you use it in a consistent manner.

Here are the steps I recommend you follow:

  1. When you sit down at the instrument put on a new tube of distilled water. While you are setting up your templates, let this run on high to make sure that whoever ran the system last cleaned it properly, and there’s no bleach there.
  2. Put your QC bead (peak 6 bead) on and open your target template for the assay. Run that bead to make sure that you’re hitting the target values that you set when you established the baseline during the development phase of your panel.
  3. Monitor the voltages you are using; if they are significantly different from previous runs, stop before you do anything else. You need to figure out what happened.

Figure 1: Anatomy of a User-defined QC template

Was there an issue with the laser? Was there an issue with the PMT? Was the system not clean? Is there something going on that I need to address before I go on to the second point?

Figure out the problem before starting your experiment.

2. Ensure you have plots of time vs fluorescence for each of the lasers you are using.

The first thing you really want to do is make sure you have plots of time versus fluorescence for each of the lasers that you’re looking at and that you are monitoring that fluorescence to make sure you’re not seeing any problems.

Two big problems you’re going to worry about are clogs in front of the flow cell and clogs in the back end of the flow cell.

Typically, a clog in the front end of the flow cell is going to manifest itself by you not seeing any events at all.

If that happens you have to take your tube off and do some cleaning procedures. This could be priming to flush the SIP or using any sort of automatic declogging protocol that the system may have.

If it gets really bad, you might have to use something like Contrad or bleach to clean out even more, and eventually, you may have to escalate it up to your core staff.

Now if you have a blockage on the far side of the flow cell, this causes a buildup of back pressure because the clog has narrowed the tubing. The back pressure reduces the speed of the stream, resulting in the cells missing their time delay windows.

This typically manifests as a loss of signal at the laser farthest from the target laser (typically, the blue laser is the target laser).

Figure 2: Good vs bad flow. The plot on the left shows good, uniform flow while on the right are the consequences of back pressure slowing the flow rate down, making the cells miss the time delay window.

So say on your green laser, that’s three lasers out, you lose that signal first. You can see that happen when you’re watching your plots.

I also like to encourage people to run their samples in tubes. I know the HTS has an allure of being able to walk away, but if you walk away from your system, you cannot monitor what’s going on, and you can lose something.

Also, a lot of the HTS systems don’t agitate the samples efficiently, so the sample is not always suspended. If you’re running large volumes, the cells might start to settle and you’re going to lose those cells because some of those cells have settled in the dead volume.

So, as much as the HTS can save time, it’s better to run the samples in a tube so you can see what’s going on.

3. Apply appropriate gating procedures.

What happens if you look at your data after you’ve acquired it? One of the first strategies is to do flow stability gating.

Figure 3: Flow stability gating

If you have walked away from monitoring your run, this flow stability gate is especially important. It lets you identify issues through the run and eliminate them.

Next, go through the rest of your gating strategies and make sure that all the parameters look good. This is another place where a control whose behavior you already know, such as a reference control, becomes very useful.

Figure 4: Tracking reference control behavior.

With the reference control, when you initially run this tube, it will help identify any issues on the system. During analysis, it helps ensure that the expected gating profiles are being maintained and data within the acceptable range is being generated.

A lot of troubleshooting is focused on fluidics issues. If you sit down and think about your workflow, a couple of little tweaks here and there will ultimately improve the quality of your data and help you identify issues before they become problems, making your troubleshooting much smoother. Consider these three things: what you do before you start collecting data, ensuring you have appropriate plots of time vs fluorescence for each of the lasers you are using, and applying appropriate gating procedures.

To learn more about 3 Ways To Improve Flow Cytometry Troubleshooting, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.

Flow Cytometry Mastery Class wait list | Expert Cytometry | Flow Cytometry Training

3 Questions You Should Be Asking About Flow Cytometry Controls For Your Experiments

Written By: Tim Bushnell, Ph.D.

What are the controls that you should be considering when planning and designing your flow cytometry experiments?

Are you using isotype controls? Should you be?

Isotype controls are among the controls still used in flow cytometry today, and they are very controversial. In fact, I encourage people not to use them.

Are you following the rules of compensation and including the appropriate controls?

Compensation is such an important thing to get correct when running your experiments, especially if you’re doing polychromatic flow cytometry, which many of us are doing nowadays.

What types of quality controls should you have in place? Quality control is an important part of the whole experimental process, and yet many people don’t worry about it or even think about it.

But if left unchecked, poor quality control can cause major problems.

For example, a colleague of mine was searching for a biomarker for a disease that they were studying. They had done several years worth of analysis and over time acquired many samples. When they went to analyze the data they found what they thought was a potential biomarker.

Exciting!

So they went back to try and prove it. Well, it turns out that they couldn’t prove that the biomarker was actually valuable. When they dug deeper they found out that halfway through their experimental runs, the laser on the system had been replaced – and no one had told them.

There was no procedure in place, so nobody knew the laser had been replaced other than the service engineer.

Ultimately that impacted all of their data and wasted years worth of time, money and effort.

Having the right controls in place would have prevented this very frustrating situation.

Here we dive into 3 questions you should be asking yourself to ensure that you are producing reliable and reproducible data…

1. Should you be using isotype controls?

The theory for using isotype controls is that the isotype control is an antibody with the same isotype as your target antibody but which binds to a non-target. This is supposed to be able to tell you what the non-specific binding is of your target antibody.

But, does it really tell you this? Are Isotype controls really acting as a control in your experiments?

Well, the first question you have to ask is are there primary targets for the isotype control on your cells of interest, and how do you know this?

For example, MOPC-173 is a common mouse IgG2a kappa isotype control clone. It was first described in the 1970s and has not been shown to bind any specific target. In fact, the production sheet from one vendor specifically states that this antibody was chosen as an isotype control after screening on a variety of resting, activated, live, and fixed mouse, rat, and human tissues.

Now, how much characterization was really done? As we continue to subset our populations of interest, we always find new and novel things. Look at the murine reagent B220: long thought to be restricted to mouse B cells, it has recently been shown to be present on a subset of human B cells as well.

So, a broad-strokes screen showing that the isotype control usually doesn’t bind is not necessarily reassuring.

Second, do you know that the variable region of your isotype control has the same affinity for off-targets as the variable region of your target antibody? And how would you figure that out?

Third is, of course, the fluorochrome-to-protein ratio.

Now, if you’re using PE, yes, a one-to-one ratio is probably reasonable, but in many other cases, especially when using things like Alexa Fluor dyes or FITC, we don’t necessarily know what the F-to-P ratio is unless you get it directly from the vendor, or you try to figure it out.

While historically, isotype controls have been used to determine positivity, they really only help you show if blocking was effective.

I really discourage the use of isotype controls.

But you might be worried about what the reviewers of your publications will say.

This happened to a colleague who recently submitted a paper for publication, and one of the reviewers criticized the paper for the lack of isotype controls.

We put together a two-page document explaining why the isotype controls were not valid in this case and why they are not useful. We gave it to the investigator to send back to the reviewers. The paper ultimately got published.

If you are not convinced, read this paper by Anderson and co-workers from 2016.

In this paper, they do a really good job of showing you the value — or in this case the lack of value — of isotype controls.

2. Do you have a quality control procedure in place?

Quality control is something that a lot of core facilities do.

Running the instrument, checking how things are behaving, using some metrics.

You might think it’s okay to leave this in the hands of the core facility, but it’s really important that you, the end user, ask about those metrics, look at them, and try and understand what that information is telling you.

As a core director and core operator, you need to look at that data and you can’t just run it and walk away. You have to monitor it over time.

Because if you don’t monitor it over time, you don’t know and won’t notice trends.

Figure 1: User-driven quality control measurement of instrument performance.

If you’re using OEM protocols, you have to understand the power and limitations of those protocols, so that you can decide for yourself when and where you might want to intervene in quality control.

Secondly, quality control improves the rigor of our experiments.

Rigor is a huge component of the whole reproducibility initiative. Knowing that your machine is behaving the same way on a day-to-day, week-to-week, month-to-month basis because of quality control is essential to having good data for your publications.

Now don’t just rely on the core facility’s personnel to do quality control. You, the end user, can build in your own quality control metrics as well.

It’s very simple to get a standard bead, such as the peak 6 beads from Spherotech, that you run every single time you run your experiment. When you start your experimental protocols, you run that bead and establish some target values. Then, every time you run an experiment on that machine, before you run your samples, you put that bead on and check those target values.

Make sure you’re getting the same target values. If you have to adjust the voltage, monitor how much you change it and make a note of that. If the change is too large, more than 10%, you really want to talk to the core facility.

Find out what might’ve happened. Maybe there was an instrument replacement? A laser replacement? A PMT replacement? Maybe there’s something wrong with the machine?

But before you put your samples on, you’ve done that quality control.
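As a sketch of what that check might look like in practice (the channels, voltages, and 10% threshold below are placeholders, not values from any particular instrument), a few lines of Python can compare today’s bead settings against your baseline targets:

```python
# Compare today's QC bead voltages against baseline targets (a sketch; channels and values are placeholders).
baseline_voltages = {"FITC": 450, "PE": 480, "APC": 520}
today_voltages    = {"FITC": 455, "PE": 540, "APC": 522}   # PE has drifted noticeably

THRESHOLD = 0.10  # flag changes larger than 10%, as suggested above

for channel, target in baseline_voltages.items():
    drift = abs(today_voltages[channel] - target) / target
    status = "OK" if drift <= THRESHOLD else "CHECK WITH THE CORE FACILITY"
    print(f"{channel}: voltage change of {drift:.1%} -> {status}")
```

Keeping a running log of these values also gives you the trend data discussed in the quality control section above.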

3. Are you following the 3 cardinal rules of compensation?

You need to be following the 3 rules of compensation whenever you run a flow cytometry experiment. The three rules of compensation are as follows.

  1. The control fluorescence must be at least as bright as the experimental fluorescence if not brighter.
  2. The background fluorescence of the carriers of a positive and a negative sample must be matched.
  3. The control and the experimental sample must have identical characteristics: the same fluorochrome, measured at the same detector sensitivity.

The first rule, that we need to have the control sample at least as bright as the experimental sample, exists because when we calculate compensation we want to get an accurate measurement of the slope of the line between the negative and the positive. The more accurate that measurement the better.

Figure 2: Compensation Rule 1.

At a higher fluorescence, with more photons being collected, we get a more precise measurement. Implicit in the first rule is the fact that the signal must be on scale. So we need to be within the linear dynamic range of the PMT detector as well as on scale for the detector.

If your signal is off scale, you need to reevaluate how you set up compensation. Especially when you develop a polychromatic panel, I recommend that you make sure your compensation is accurate and that your compensation signals are on scale.

Figure 3: Inspecting signal to ensure data is on-scale and in the linear region of the PMT.
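To make the slope idea concrete, here is a minimal sketch of how a single spillover coefficient can be estimated from a single-stain control; the median values are made up for illustration and are not taken from the figures.

```python
# Estimate one spillover coefficient from a single-stain control (a sketch; the medians are made up).
fitc_primary_pos = 52_000   # median FITC-detector signal of the stained (positive) population
fitc_primary_neg = 300      # median FITC-detector signal of the negative population
pe_spill_pos     = 6_500    # median PE-detector signal of the same positive population
pe_spill_neg     = 250      # median PE-detector signal of the negative population

# Spillover of FITC into the PE detector = slope of the line between negative and positive.
spillover = (pe_spill_pos - pe_spill_neg) / (fitc_primary_pos - fitc_primary_neg)
print(f"FITC -> PE spillover: {spillover:.1%}")   # ~12.1% for these made-up values

# The brighter the positive control (rule 1), the longer this line and the more precise
# the slope estimate; a dim control leaves the slope dominated by measurement noise.
```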

The second rule, that the background of the carrier has to be matched, is important because compensation is a property of the fluorochrome, not the antibody or the carrier. Since the compensation calculation depends on the background signal, it is imperative that the backgrounds are matched.

Figure 4: Impact of the second rule of compensation.

You can use beads or cells. You can even have both beads and cells in your compensation matrix, but you need a positive and a negative for each carrier, because the positive sample must have the same background fluorescence as the negative control sample.

The third rule, which is most important, is that you need to have the same fluorochrome and the same sensitivity.

This means you can’t use FITC to compensate GFP. Even though they both are green and they both are typically measured in the same detector, they are very different spectrally.

More importantly, this is critical for tandem dyes, which are manmade and vary from lot to lot. So you really want to make sure that you use the same lot of tandem dye for compensation as in the experiment, which is why beads are useful.

Figure 5: Different lots of tandem dyes cannot be used to compensate each other.

The second thing is that you have to have the same sensitivity. That means you need to use the same voltage. You do not want to reuse your compensation matrices day in and day out, because your voltages may change over time. Even a small five-volt change can impact your compensation.

Controls are an incredibly important part of your flow cytometry experiments. If not done correctly, poor controls will waste time and money, but with proper care, high-quality controls will result in high-quality data. Just be sure to ask yourself these key questions: should you be using isotype controls, do you have a quality control procedure in place, and are you following the 3 cardinal rules of compensation?

To learn more about 3 Questions You Should Be Asking About Flow Cytometry Controls For Your Experiments, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.

Flow Cytometry Mastery Class wait list | Expert Cytometry | Flow Cytometry Training

3 Considerations To Ensure Your Cell Sorting Flow Cytometry Experiments Run Smoothly

Written By: Tim Bushnell, Ph.D.

Cell sorting is such an important technique because it’s a gateway tool for many other downstream applications.

Assays such as culturing cells, genomic analysis, proteomics, injections into mice and the like are enabled by cell sorting, allowing researchers to use a homogeneously defined population of cells in their experiments.

With single-cell genomics we can readily characterize a population in great detail, but we have to have a single purified population for it to work.

That’s where the cell sorter comes in.

There are all sorts of applications we can do with a purified population and with the advances in sorting and fluorescent technologies, it is possible to isolate complex phenotypes from rare populations.

Fluorescent proteins are a great example: we can look at the expression of the gene of interest itself, or of a marker coupled to a fluorescent protein, and sort on that.

We can combine two, three, or four of these (GFP, YFP, and some of the ‘fruit’ fluorescent proteins), so you can look at the expression patterns of two or three different proteins at the same time using expression cassettes.

Then you will be able to isolate cells that express one, two, or all three of the proteins.

Great technology.

But in order to reap the benefits of this technology, you need to consider a few things about cell sorting first…

1. Size dictates almost everything you are going to do.

The first important thing about cell sorting to remember is that the cell size dictates almost everything you’re gonna do.

And when I talk about cell size, I’m talking about the cell volume, not the flattened out measurement you can make on a microscope, but the volume of the cell.

Figure 1 shows a nozzle from a cell sorter. Nozzles come in different sizes, ranging from 70 microns to over 130 microns.

Figure 1: Picture of a nozzle from a cell sorter.

So what does that mean?

If you assume the cell is roughly spherical, its volume is 4/3πr³, so knowing the volume also gives you the radius (and the diameter), and vice versa.

We want to use a nozzle that is four to five times larger than our cells.

So if I have a lymphocyte at 10 microns in diameter, a 70 micron nozzle will work. If I’m looking at an astrocyte that’s 30 microns in diameter, then I really need to be looking at something in the 130 micron range, and there are several tables out there that show this.

Figure 2: Relationship between nozzle size and cell type.
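To make the rule of thumb concrete, here is a minimal sketch of that arithmetic; the 4-5x factor and the example cell diameters come from the text above, while the list of available nozzles is an assumption about a typical sorter.

```python
import math

# Pick the smallest nozzle that is at least ~4x the cell diameter (a sketch; the nozzle list is assumed).
available_nozzles_um = [70, 85, 100, 130]

def nozzle_for(cell_diameter_um, safety_factor=4):
    """Return the smallest available nozzle at least safety_factor times the cell diameter."""
    needed = cell_diameter_um * safety_factor   # lower end of the 4-5x rule of thumb
    for nozzle in available_nozzles_um:
        if nozzle >= needed:
            return nozzle
    return None  # cell is too large for the nozzles on this instrument

for name, diameter_um in [("lymphocyte", 10), ("astrocyte", 30)]:
    volume_um3 = 4 / 3 * math.pi * (diameter_um / 2) ** 3   # rough volume of a spherical cell
    print(f"{name}: ~{volume_um3:.0f} cubic microns, suggested nozzle: {nozzle_for(diameter_um)} micron")
```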

Now, once you determine the nozzle size, the sort operators will set up the instrument, and that nozzle size will dictate what the sheath pressure will be.

The larger the nozzle, the lower the sheath pressure.

Figure 3: Relationship between nozzle size, sheath pressure, and frequency.

That, in turn, dictates the frequency of droplet generation, because there is a balance between the frequency of droplet generation and the sheath pressure.

If you know the frequency of droplet generation, you can determine how many events per second you should run. Droplet generation is typically on the order of tens of thousands of droplets per second.

Based on Poisson statistics that we’ve talked about in several blogs here and on Facebook, we want to use an event rate of one event per four droplets. So, if the system is generating 90,000 droplets per second, the event rate needs to be about 22,500.

With a larger nozzle, this may mean sorting at only 4,000 or 5,000 events per second.

With everything set up on the instrument, it is possible to do the back of the envelope calculation to determine how long the sort should take.

Figure 4: Sorting calculations based on population frequency, and frequency of droplet generation.

Estimate how long it’s gonna take to run your sample based upon that speed, based upon how many events you need and based upon the frequency of the population.

There are tables that we’ve published that show that those types of calculations.
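As a back-of-the-envelope sketch of that calculation (the droplet rate and the 1-event-per-4-droplets rule come from the text; the population frequency, target cell number, and sort efficiency are made-up inputs):

```python
# Back-of-the-envelope sort time estimate (a sketch; frequency, target, and efficiency are made up).
droplet_rate_hz = 90_000              # droplets per second at this nozzle and pressure
event_rate_hz = droplet_rate_hz / 4   # Poisson rule of thumb: ~1 event per 4 droplets -> 22,500/s

population_frequency = 0.02           # target cells are 2% of the sample (assumption)
cells_needed = 1_000_000              # sorted cells required for the downstream assay (assumption)
sort_efficiency = 0.8                 # fraction of target events actually deposited (assumption)

total_events = cells_needed / (population_frequency * sort_efficiency)
sort_time_s = total_events / event_rate_hz
print(f"Events to acquire: {total_events:,.0f}")
print(f"Estimated sort time: {sort_time_s / 3600:.1f} hours")
```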

2. Sample preparation is key.

The second important concept is sample preparation.

A clog will ruin your day. When the nozzle clogs, it will force the system to shut down. If you’re fortunate, it is only a minor delay in the sorting process. With very bad clogs, it may take hours to clean the system and get it back up and running.

So filtering the sample, just before you put the cells on the sorter is a critical final step in sample preparation.

Second, think about what the preparation of your cells is going through.

If you’ve got adherent cells and you’re going to use trypsin to pull them off the plate, you don’t want to just add serum back to neutralize the trypsin, because you will add back the compounds (calcium and magnesium) that the cells need to start adhering again. A soybean trypsin inhibitor is a better choice.

Also add a little bit of DNAse, about 10 U/ml, because that will help minimize the amount of free DNA causing cells to stick together. Having used DNAse for years to help reduce clogging, I have never seen any impact on genomic analysis.

So remember these three steps for sample preparation: Filter the cells, use soybean trypsin inhibitor and add DNAse.

3. What type of tube are you collecting your cells in?

The third issue to review is the catch tube. What are your sorted cells going to rest in until the end of the sort? The catch tube is very important.

With electrostatic sorters, the droplet containing the cell is charged, and if the angles are off, it is possible for the droplet to be attracted to the side of the tube, and therefore the cell would die as the droplet evaporated.

To prevent this, coating the tube to reduce its charge is one way to minimize the effect. To do this, add your staining buffer to the tube and let it sit for half an hour to 40 minutes or longer, so that the protein in the staining buffer can coat the plastic and reduce the charge.

The other thing to optimize is the catch buffer. Using 100% serum isn’t a great idea, because the earliest sorted cells are going to be sitting in 100% serum for a while.

Using your staining buffer with twice the amount of protein you would usually use is a good place to start. However, testing different conditions to find the best one for keeping the cells happy is critical.

With the myriad of downstream applications that are enabled by isolating cells, taking the time to optimize the experiment ensures that the resulting cells are of the highest quality for the downstream application. Knowing the cell size will determine the rate at which the cells can be sorted. Following the best practices in sample preparation, including filtering the cells before sorting, adding DNAse to reduce clumping caused by free DNA, and using a trypsin inhibitor to neutralize the trypsin used to remove adherent cells, will help minimize clogging. Finally, make sure that the catch tubes are treated properly to reduce the chance of the droplets sticking to the side, while ensuring that the catch solution preserves the cells.

To learn more about the 3 Considerations To Ensure Your Cell Sorting Flow Cytometry Experiments Run Smoothly, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.

Flow Cytometry Mastery Class wait list | Expert Cytometry | Flow Cytometry Training

Microscopy – 5 Reasons Coverslips Are Important For High-Quality Imaging

Written By: Heather Brown Harding, Ph.D.

Most people are familiar with coverslips being placed on slides to protect the sample, but that’s not the only reason that coverslips are important.

They also affect the image quality.

Coverslips work with your microscope to focus light to a single point and avoid unnecessary noise in your image.

Having the wrong type of coverslip will damage the quality of your images and the quality of the data you extract from those images.

So today, we will discuss five reasons why coverslips, and utilizing the right ones, will improve your imaging…

1. Objectives expect there to be a coverslip in the light path.

Objectives are designed to compensate for a coverslip in the light path, and specifically for a 0.17mm thick glass to be in the light path.

This correlates with a number 1.5 coverslip.

You will also find number 1 (0.15mm) and number 2 (0.22mm) coverslips available for sale, but those will not be as effective.

If you check out the Nikon website MicroscopyU, you will see that an objective with a numerical aperture of 0.95 can lose almost 80% of its brightness with a thickness variance of only 0.03mm, a 30 micrometer deviation. Using a #1 coverslip can significantly reduce the brightness of your sample.

Also important to note is that different brands and lines within the brands have different allowable variance within their coverslips, and this becomes important when we’re using high numerical aperture lens or when performing co-localization experiments.

Cell culture plates are often one millimeter thick, so you can see why this is terrible for attempting live cell imaging, or any imaging other than checking the health of your cells.

2. Refractive index is also compensated for within the light path of the objectives.

The refractive index of glass is approximately 1.5, while that of polystyrene, the most common plastic for tissue culture plates, is about 1.6.

When we have this mismatch in refractive index, we lose a lot of light through unmatched refraction angles.

This diminishes our signal, so the image becomes more unclear.

You want to get the clearest and most detailed images possible, and to do that you need to ensure that your refractive indexes match up.

This means using plastic tissue culture plates as your imaging coverslip is a bad idea.

3. Thick coverslips take up too much of your working distance.

Many high numerical aperture lenses have a very small working distance where it might only be 300 micrometers or less.

Now, if our coverslip is taking up too much of that, we may not be able to focus through that to our sample.

Similarly, if we have a tissue culture plate that may be one millimeter thick, we physically will not be able to focus through the plate to get to the sample and have an in-focus image.

Too many times, I have seen users put time and resources in preparing their samples, only to find they can’t even focus on their sample.

Why waste your time and money on something as simple as a sample carrier?

Plan ahead and get the right type of coverslip so you can get the images you need the first time.

4. Plastics are depolarizing and light scattering.

This is why we cannot use plastic for DIC imaging, because plastics are depolarizing and scatter light.

If you have a polarizer in the light path, you will see rainbows from the stress in the plastic, and the plastic will also scatter light in many directions, again losing signal.

So you end up using more laser power.

But because you have to increase the laser power, you might not be able to achieve a good enough signal-to-noise ratio to get a good image.

5. Plastics introduce autofluorescence.

Many plastics are autofluorescent.

Therefore when we shine a light on the plastic, additional light gets sent back from the sample carrier.

So, if you are trying to get an image of a fluorescent tag or protein in your cells, the autofluorescence of the plastic is going to interfere.

The plastic introduces extra noise into your image and this degrades the quality of the image of your sample.

Which leads to poor results and ultimately a waste of time and money.

It is important to consider the type of coverslip that you are using in your microscopy experiments. Using the wrong kind can have a detrimental effect on the quality of your results. A few of the reasons the wrong type of coverslip will ruin your experiments: objectives expect there to be a coverslip in the light path; refractive index is compensated for within the light path of the objectives; thick coverslips take up too much of your working distance; plastics are depolarizing and light scattering; and plastics introduce autofluorescence. Before your next microscopy experiment, double-check that you are using the right kind of coverslip.

To learn more about Microscopy – 5 Reasons Coverslips Are Important For High-Quality Imaging, and to get access to all of our advanced microscopy materials including training videos, presentations, workbooks, and private group membership, get on the Expert Microscopy wait list.

ExScope Microscopy Wait List | Flow Cytometry Training


3 Action Steps You Can Take Right Now To Improve Your Flow Cytometry Reproducibility

Written By: Tim Bushnell, Ph.D.

Reproducibility is a key issue in science.

Massive amounts of time and money are wasted when the results of experiments are not reproducible.

For example, I was called into a lab to look at their data because they had spent thousands of dollars sorting precious human samples and were now doing genomics analysis with the isolated cells.

Unfortunately, the results of the genomics analysis made no sense based on the sorted populations. The lab was working backward through every step of the process to try to identify what might have happened and if the experiments were salvageable.

As I reviewed the sorting process, one of the striking factors was that the quality control of the cell sorting experiments was very, very poor. In fact, it was non-existent.

Whoever was running their sorter was not performing quality control on the instrument, so the sorting results were all over the place. Voltages were changed dramatically for each experiment, and the separation of the target cells ranged from barely differentiated to well separated. The compensation controls told another story about the problems with these sorting experiments.

In the end, this lab wasted tens of thousands of dollars, countless man-hours, and precious samples because there was not a focus on quality control and best practices.

Reproducibility is a state of mind.

It’s not one simple thing that you do that will make all your data more reproducible; it’s a shift in the way one thinks about and performs experiments.

With the emphasis on rigor and reproducibility in science, it’s very important that researchers start putting into place everything they can to help improve the quality and reproducibility of their data.

Here are 3 action steps that can be taken to enhance experimental reproducibility…

1. Evaluate your quality control processes to improve reproducibility.

Quality control is an important component of reproducibility.

That includes monitoring the quality control of the instrument by making sure that quality control metrics are being run on a daily basis.

You, the end user, have a right to ask to look at that data.

Don’t be afraid to go up to whoever’s running the machine and say, “Hey, how’s the quality of the machine? “Can I look at the QC data, and see how it’s going?”

Let them talk to you about it so you understand what it means so that you can get a better feel for what’s going on with your instrument.

Figure 1: Quality control tracking using beads.

You don’t want to sit down at a machine that is not working properly and be unaware of the system’s limitations.

Your data will end up looking poor or worse – you might make erroneous conclusions because the machine wasn’t performing properly.

Quality control is done to make sure the machine performs consistently on a day-to-day basis.

That means that you, as the end user, should also be thinking about quality control of your experiments, it is not just the job of the core facility.

At a minimum, researchers can include a bead standard as a way of monitoring quality control before running the experiment.

2. Develop the assay completely before performing the assay.

When sitting down to develop an assay, it’s important to work through the whole process.

The first part of that process is understanding what the biology is and what the experiment is trying to prove.

Next, sketch out the proposed primary analysis – what will the gating strategy look like, and what data will be extracted for secondary analysis. If the experiment is a cell sorting experiment, understand what the downstream application that cells will be used for is, and the limitations of that assay.

It is also important to decide what statistical analysis will be performed and calculate the power of the experiment to determine how many samples will be needed. These steps will go a long way to prevent p-hacking and prevent HARKing.

With those steps completed, the next step is experimental design. That includes deciding what instrument will be used and what antigens will be needed, and building an initial panel. Reviewing the analysis diagrams drawn above can help identify the critical targets. Those should be paired with bright fluorochromes, in channels that have low error, allowing for a more sensitive measurement. With this in place, the next steps include testing the reagents (titration and voltage optimization). From there comes the optimization process, where the best conditions for the assay are determined.

Figure 2: Error contribution based on detector and fluorochromes.

After optimization comes validation of the assay. This includes characterizing the necessary controls that ensure identification of the target populations, demonstrating the stability of the assay, measuring the variation within the staining process, and the like. Once the assay is validated, the process is locked down using Standard Operating Procedures.

3. Ensure you have quality SOPs in place.

For those who are working in a regulated environment, the SOP is part of the daily routine. For others, the idea of a protocol is more common. The biggest difference between the two documents is the level of document control and the expectation of how close the document is followed. With SOPs, they must be followed exactly, and deviations have to be noted and signed. SOPs are not changed without significant discussion and demonstration of the need for change. All details about the reagents are noted, serial numbers of equipment, lot numbers of reagents and more. The exacting SOP ensures that everyone performs the experiment the same way.

A protocol, on the other hand, lists the general steps to follow. It is often changed on the fly, based on the needs of the experiment at the time. It lists recommended controls, but not all may be necessary for the specific assay. The level of detail about reagents and more are not expected.

For those performing a longitudinal study or for a long period of time, developing and implementing an SOP will improve the rigor of the experiment.

An added benefit of the SOP is that it can be used to train additional researchers to assist in the experiment, as they will know exactly what the steps are and how to perform them.

Reproducibility is the name of the game, so take a few minutes to think about how you can change your activities and your research workflow to increase the quality and consistency of your data. A few action steps that you can take right now to improve your data’s reproducibility are to evaluate your quality control processes, develop the assay completely before performing the assay, and ensure you have quality SOPs in place. Taking those few simple steps will protect you from the problems associated with non-reproducible data.

To learn more about the 3 Action Steps You Can Take Right Now To Improve Your Flow Cytometry Reproducibility, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.

Flow Cytometry Mastery Class wait list | Expert Cytometry | Flow Cytometry Training

5 Essential Controls For Reproducible Fluorescent Microscopy Imaging

Written By: Heather Brown Harding, Ph.D.

Fluorescent imaging has the potential to bring great insights into your research project.

But, there is a lot of room for error.

And one of the most common and completely preventable errors that many researchers make is not having the necessary controls.

Having all the necessary controls may seem tedious.

I know in graduate school I was not a huge fan of controls, and would sometimes put off doing all the controls I needed until the end.

However, doing your controls after the fact is a terrible idea.

You could realize that all the data you recorded was just an artifact and that you wasted your time and samples.

It’s incredibly important to plan out your controls and perform them ahead of time so that you know that you have specific staining and you’re not picking up random noise, auto-fluorescence, and non-specific staining, etc.

Here are 5 fluorescent microscopy controls that you should be including in all your experiments…

1. Unlabeled sample.

The first and easiest control is an unlabeled sample.

For an unlabeled sample, all you do is take your sample and fix it; nothing else is done to it.

If you image this sample with the same settings that you use for your experiment and you are getting signal from it, then you have auto-fluorescence.

This can either be dealt with by adding reagents to quench the auto-fluorescence, or, if your signal is high enough above it, you will at least know what is auto-fluorescence and what is your real signal.

I had a user who did a whole experiment on Arabidopsis, a plant, and it turned out that what she was imaging was actually all auto-fluorescence.

It took three days of imaging to discover that all she was seeing was auto-fluorescence.

Three days down the tubes, many samples used and money wasted.

2. Non-specific binding control.

If you’re staining your sample with antibodies, you need to have a control that includes just the secondary fluorescent antibody.

If you get any sort of staining pattern from using just the secondary antibody, then you have unspecific binding occurring.

The secondary antibody is binding to things other than the primary antibody, therefore, this signal cannot be trusted to be specific to the protein you’re trying to label.

If this is happening to you, then you may not be blocking or washing your sample long enough.

So do some testing to find a blocking/washing time that prevents this non-specific binding of the antibody.

3. Positive and negative control.

Many people forget about the importance of this control: you need to include both positive and negative controls if you’re trying to record any type of phenomenon.

Let’s say you were trying to measure the amount of hydrogen peroxide in the system, then you want to make sure that your reporter is actually reporting this.

So, as a negative control, you would want to quench the signal with something like DTT and then, as a positive control, you can actually add in hydrogen peroxide and see where the top of the range is.

This will tell you what the two ends of the dynamic range are so that you can compare those values with your sample.

If you don’t have a positive and negative control, you can’t confirm if the experiment is working or not. It will also give you a good idea of the sensitivity of your system.

4. Antibody titration curve.

Any time you are using new antibodies in your lab, you want to do a titration curve of both the primary and secondary antibodies.

You need to determine the ideal amount of antibody to use in your experiment.

Ultimately, you want the dilution that gives the maximum specific staining with the smallest amount of antibody.

Why?

Well, if you have too much antibody in your system, you can have nonspecific binding pretty quickly. This will alter the ability to get quantitative data from your sample.

Thus diminishing the confidence in your experiment.
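A minimal sketch of how you might pick that dilution from titration data; the dilutions and intensity values below are invented for illustration only.

```python
# Choose the antibody dilution with the best signal-to-background ratio (a sketch; values are invented).
titration = {
    # dilution: (mean specific signal, mean background signal)
    "1:100":  (12_000, 900),
    "1:200":  (11_500, 400),
    "1:500":  (10_800, 150),
    "1:1000": (6_500,  120),
}

best = max(titration, key=lambda d: titration[d][0] / titration[d][1])
for dilution, (signal, background) in titration.items():
    print(f"{dilution}: signal/background = {signal / background:.1f}")
print(f"Suggested dilution: {best}")   # 1:500 for these invented numbers
```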

5. Blinded image capture.

Now, in an ideal situation, you would have one person that performs the experiment and stains the samples and then another person who takes the images without knowing what each sample is.

This minimizes any bias that we may have and allows you to have a more controlled experiment.

Now, if you can’t do this, what we often suggest is picking a specific number of places in different locations on your coverslip, and taking the images at the same locations on every single coverslip.

This way you can minimize the bias you have towards only capturing images of pretty cells or of areas that contain the outcome that you expect.

Controls are an integral part of all science, and the complexity of fluorescent microscopy makes including the right controls in your experiments paramount. You should be including these 5 controls in your experiments: an unlabeled sample, a non-specific binding control, a positive and negative control, an antibody titration curve, and blinded image capture. With those controls, you can be sure that your experiments are showing what you think they are, and perform your imaging with confidence. So, happy imaging!

To learn more about the 5 Essential Controls For Reproducible Fluorescent Microscopy Imaging, and to get access to all of our advanced microscopy materials including training videos, presentations, workbooks, and private group membership, get on the Expert Microscopy wait list.

ExScope Microscopy Wait List | Flow Cytometry Training

3 Ways To Measure Cell Death With Flow Cytometry

Written By: Tim Bushnell, Ph.D.

Cell death is a natural part of the lifecycle of a cell. During development, for example, it is critical for the shaping of the fingers. The process of ordered cell death, or apoptosis, is so important that in 2002, Sidney Brenner, Robert Horvitz, and John Sulston received the Nobel Prize in Medicine for their work on understanding it. There are many different ways to measure cell death, and flow cytometry is an ideal tool for making these measurements. Whether you are just assessing the viability of your cells or you are interested in the exact stage of cell death your sample is in, there are a variety of ways that you can measure cell death.

The most basic reason to measure the end result of cell death is to ensure the quality of your cell sorting experiment. Sorting dead cells for downstream analysis is a waste of time and money. This will also yield spurious results.

Researchers developing new drugs that can kill cancer cells can use this technique to determine appropriate concentrations of drug to use, what combinations of drugs might be more effective and so on.

Learn how can you use flow cytometry to measure cell death and get better results in your flow experiments…

1. Viability dyes.

When a cell dies, the cell membrane loses its integrity, allowing anything to enter into the cell. Flow cytometrists can take advantage of this by using cell impermeant dyes to identify the dead cells. However, these will not work if the researcher is staining for an intracellular target. In this case, the use of amine-reactive dyes is called for.

This figure, which can be found on the BioLegend website, shows the difference between the amine reactive dyes (on the left) and the cell impermeant dyes (on the right).

Figure 1: Staining for dead cells using Amine-reactive dyes (left) or cell impermeant dyes (right).

Cell impermeant dyes are typically DNA binding dyes, and can only enter the cell if the membrane is compromised. There are a wide variety of these dyes, and some of the most common are shown in the table below.

Table 1: Common DNA viability dyes

Dye Excitation max (nm) Emission max (nm) Laser(s) (nm)
Propidium Iodide (PI) 535 617 488, 532
7AAD 546 647 488, 532
DAPI 355 460 355, 405
Draq-7 600 677 633

Invitrogen has a large number of other dyes in their SYTOX line, which span multiple excitation lines.

To use these viability dyes, the researcher should add the dye before analysis or sorting. Now some researchers are concerned about adding these dyes to a sample in case they will impact downstream analysis, especially in Genomics. This is not the case for two reasons. First, the amount of dye used for this purpose is much lower (10-100X) than used for cell cycle analysis. Second is the dilution factor. If you sort a cell with an 85-micron nozzle, the droplet size has a volume measured in nanoliters and the core stream is only a fraction of that volume, so the amount of the dye that will be remaining after sorting is negligible.
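For a rough sense of the dilution argument, the sketch below estimates the droplet volume and the resulting dilution, assuming (purely for illustration) a droplet diameter roughly equal to the 85-micron nozzle and a 1 mL volume of buffer in the catch tube; the true carryover is even smaller because only the core stream contains sample.

```python
import math

# Order-of-magnitude droplet volume and dye dilution after sorting (a sketch; assumptions noted below).
droplet_diameter_um = 85    # assume droplet diameter ~ nozzle diameter (rough approximation)
catch_volume_ml = 1.0       # assumed volume of buffer in the catch tube

droplet_volume_l = 4 / 3 * math.pi * (droplet_diameter_um / 2 * 1e-6) ** 3 * 1000  # m^3 -> liters
print(f"Droplet volume: ~{droplet_volume_l * 1e9:.2f} nL")                          # ~0.3 nL

dilution = (catch_volume_ml * 1e-3) / droplet_volume_l
print(f"Dilution of carried-over dye into the catch tube: roughly 1 in {dilution:,.0f}")
```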

Figure 2: Cells stained with either DAPI (A) or 7-AAD (B). Data from the Blizard Institute’s website.

Since these dyes require an intact membrane to be excluded, this approach doesn’t work when performing an intracellular staining assay or when the cells have to be fixed before analysis. Various techniques have been used in the past, with varying degrees of success. With the introduction of the amine reactive dyes, the ability to identify dead cells became that much easier. These dyes work by binding to the amine groups on proteins. As shown in Figure 1 above, these dyes will bind to the surface of a cell, making live cells slightly positive. Dead cells, with compromised membranes, allow the dye to enter the cell, where there are many more proteins for the dye to bind to. These dyes have fanciful names such as Zombie and Ghost, as well as the more generic ‘amine reactive dyes.’

Figure 3: Identification of dead cells using the amine reactive dyes (left). The figure on the right is from the Thermo Fisher website showing the staining of live and dead cells.

It’s really important when you’re staining with the amine reactive dyes to stain them in the absence of protein so that the free protein does not suck up the amine-reactive dye.

2. Apoptosis assays.

Programmed cell death, or apoptosis, is the body’s way to eliminate cells in a systematic way. This can be a damaged cell, or it can be part of the normal biological process of development. For example, in utero the fingers are webbed and it is through ordered apoptosis that this webbing gets removed.

In apoptosis, one of the earliest signals is the flipping of the phosphatidylserines, which face the cytosolic side when a cell is living but, by the action of flippase, face the extracellular milieu when a cell is undergoing apoptosis. Annexin V is a calcium-dependent protein that binds preferentially to phosphatidylserine. When coupled with a cell impermeant dye, it is possible to dissect the stages of apoptosis, as shown in the figure below.

Figure 4: Annexin V staining of cells undergoing drug treatment. Cells in the lower right quadrant bind Annexin and are at the early stages of cell death.

The Annexin V assay lends itself to high content screening assays, which makes it ideal for monitoring cell death in drug screening assays.

Another assay that can be run to look for the earliest stages of apoptosis uses two different dyes. In the case shown below, the DNA binding dye, Yo-Pro-1 will enter the cell before PI can. Using these two dyes, it is possible to identify the apoptotic cells from necrotic cells. Necrosis is disordered cell death, usually due to traumatic cell damage and the release of proteins from the lysosomes (a process called autolysis). The example data, courtesy of Derek Davies, shows an example of how these two dyes work.

Figure 5: Measuring apoptosis and necrosis by flow cytometry. Data courtesy of Derek Davies.

3. Mitochondria dyes.

Another hallmark of apoptosis is the depolarization of the mitochondria and the release of cytochrome C.

Cytochrome C can be measured by intracellular staining using an anti-Cytochrome C antibody. As Cytochrome C is released, the amount of staining will go down. Example data from King et al. (2007) is shown below. In this assay, cells were treated with staurosporine to induce cell death.

Figure 6: Cytochrome C release as measured by intracellular staining.

The process of Cytochrome C release requires the depolarization of the mitochondria. There are a variety of dyes that can measure mitochondrial membrane depolarization, including JC-1 and CMXRos. In this example, cells were stained with CMXRos and counterstained with To-Pro-3, a DNA binding dye. On the left are the untreated cells, while on the right are the treated cells. Depolarization is seen as a decrease in the CMXRos signal.

Figure 7: Measuring membrane depolarization using CMXRos. Data from Derek Davies.

Cell death is a normal biological process that is amenable to measurement by flow cytometry. Cell impermeant dyes are an absolute requirement for cell sorting experiments, or for live cell analysis. Since dead cells can mimic the target cell, these must be eliminated. For intracellular staining, the amine reactive dyes are an excellent choice.

Annexin V assay is a good assay to measure cell death and is a great tool for looking at cell death in a high-throughput assay. Additionally, there are several DNA dyes that can be used in this process as well. Finally, measuring mitochondrial depolarization is accomplished with several different dyes, resulting in a different measurement for cell death. Overall, flow cytometry is an excellent tool for measuring cell death, and these assays are amenable to being performed in conjunction with immunophenotyping. These three assays are just a sampling of the many other assays that can be used. So explore the world of cell death with confidence.

To learn more about the 3 Ways To Measure Cell Death With Flow Cytometry, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.

Flow Cytometry Mastery Class wait list | Expert Cytometry | Flow Cytometry Training

Avoid Data Loss By Following These Steps To Set Your Flow Cytometry Gates Correctly

Written By: Tim Bushnell, Ph.D.

How are you defining your gates?

When you do primary analysis or single tube analysis, it’s important that you make sure your gates are set correctly.

You need to know that you can find the populations that you’re interested in so you can extract the appropriate data.

This is what allows you to do your secondary or statistical analysis confidently.

What Gates Should You Be Setting Up In Your Flow Experiments

At the beginning of the experimental design process, it is a good idea to sketch out a hypothetical gating strategy. This sketch will help evaluate the panel and ensure that those markers that need good resolution are visualized with brighter fluorochromes.

In any gating strategy, gates that address machine and processing issues should be used to exclude events that could confound the analysis. These gates include a flow stability gate, a doublet discrimination gate, a ‘schmutz’ gate, and finally a gate to eliminate dead cells; if a dump channel is in the panel, this is where it can be included.

Figure 1: Gating Strategy showing the primary gates (top) and secondary gates (bottom), along with FMO controls (smaller plots bottom right).

Once these gates have been used to clean up the data, gating using the antibodies comes next. The first couple of gates are typically the major lineage markers (e.g. CD3, CD20, etc), followed by the subsetting markers, which are used to identify the population that the experiment has been built around.

The flow stability gate helps you identify where there were issues with the flow rate. If the flow is not stable at the beginning of the run, those events can confound data interpretation. Sometimes, at the very end of the run, you'll get events that are falling off because the tube is running dry, so you want to eliminate those events as well. This is especially important if there was discontinuity during the run caused by microclogging. With advances in analytical packages written in R and other languages, this process has been automated. One of these programs is called FlowAI. In the figure below, the raw data is shown on top. After FlowAI is run, two gates, 'FlowAIBadEvents' and 'FlowAIGoodEvents', are generated and downstream analysis can continue.

Figure 2: Automatic removal of events that are identified as ‘anomalies’
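FlowAI itself is an R/Bioconductor package, but the core idea of a flow stability check can be sketched in a few lines of Python: bin the events on the time parameter, compare each bin's event count to the median, and flag events in bins that deviate strongly. This is a simplified stand-in for illustration, not the FlowAI algorithm, and the bin count, tolerance, and 'Time' column name are assumptions.

```python
import numpy as np


def flag_unstable_events(time_values, n_bins=100, tolerance=0.5):
    """Return a boolean mask that is True for events falling in time bins
    whose event count deviates from the median bin by more than
    `tolerance` (as a fraction of the median).

    A simplified stand-in for automated QC tools such as FlowAI; the bin
    count and tolerance are arbitrary illustrative choices.
    """
    time_values = np.asarray(time_values, dtype=float)
    counts, edges = np.histogram(time_values, bins=n_bins)
    median = np.median(counts)
    bad_bins = np.abs(counts - median) > tolerance * median
    bin_index = np.clip(np.digitize(time_values, edges) - 1, 0, n_bins - 1)
    return bad_bins[bin_index]


# Hypothetical usage ('Time' column name is an assumption):
# keep = ~flag_unstable_events(events["Time"])
# clean_events = events[keep]
```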

From this flow stability gate, pulse geometry can be used to remove doublets. As a reminder, as a cell passes through the laser intercept, the detected light generates an electronic pulse. This pulse has three characteristics: a pulse height, a pulse width (or time of flight), and the pulse area, which is the integral of the signal over time. If two cells stick together, the pulse width will increase, so the pulse area will increase without a proportional increase in height.

Figure 3: The electronic pulse of a single (top) and a doublet (bottom) showing how the different parameters would change.
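To make the pulse-geometry gate concrete, the sketch below keeps only events whose FSC area-to-height ratio sits close to the population median, since doublets inflate area without a proportional increase in height. The channel names and the 15% tolerance are illustrative assumptions that would be tuned on real data.

```python
import numpy as np


def singlet_mask(fsc_area, fsc_height, tolerance=0.15):
    """Keep events whose FSC area-to-height ratio is close to the
    population median; doublets have inflated area relative to height.

    The 15% tolerance is an illustrative value, not a universal setting.
    """
    fsc_area = np.asarray(fsc_area, dtype=float)
    fsc_height = np.asarray(fsc_height, dtype=float)
    ratio = fsc_area / np.maximum(fsc_height, 1e-9)    # avoid division by zero
    median_ratio = np.median(ratio)
    return np.abs(ratio - median_ratio) <= tolerance * median_ratio


# Hypothetical usage (channel names are assumptions):
# singlets = events[singlet_mask(events["FSC-A"], events["FSC-H"])]
```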

The third gate is the forward scatter gate. You can use this gate to get rid of small pinocytotic cells that have complex side scatter and low forward scatter. This gate is also used to get rid of the debris in the lower left-hand corner, as well as events that are off the scale. You should use the forward scatter gate as a cleanup gate and let the antibodies do the heavy lifting.

The last gate is the viability and dump channel gate.

Using these four gates (flow stability, doublet discrimination, 'schmutz', and a gate to eliminate dead cells or a dump channel) will get you to the point where you can start doing additional analysis and use antibodies to identify populations.
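Chaining these cleanup gates is straightforward once each gate is expressed as a boolean mask. Here is a hedged Python sketch of that bookkeeping, with an event-count report at each step; the column names and thresholds in the commented usage are hypothetical placeholders, and the stability and singlet masks could be the sketches shown earlier or any equivalent.

```python
def apply_cleanup_gates(events, gates):
    """Apply a sequence of (name, mask_function) cleanup gates in order,
    reporting how many events survive each step.

    `events` is assumed to be a pandas DataFrame of compensated event
    data; each mask function takes the current DataFrame and returns a
    boolean array.
    """
    for name, mask_fn in gates:
        mask = mask_fn(events)
        print(f"{name}: kept {int(mask.sum())} of {len(events)} events")
        events = events[mask]
    return events


# Hypothetical usage -- column names and thresholds are placeholders:
# cleaned = apply_cleanup_gates(events, [
#     ("flow stability", lambda df: ~flag_unstable_events(df["Time"])),
#     ("singlets",       lambda df: singlet_mask(df["FSC-A"], df["FSC-H"])),
#     ("scatter",        lambda df: (df["FSC-A"] > 10_000) & (df["SSC-A"] < 200_000)),
#     ("live cells",     lambda df: df["Viability-A"] < 1_000),
# ])
```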

Remember, gating is a data reduction tool, so a good rule of thumb is to be generous with each gate in the initial analysis. Tighten these gates up based on their impact on your target populations and on the appropriate controls.

Start with major subset identifiers, and then get more and more specific until you reach the population of interest. Then you can take that population on for your secondary analysis.

Once you have chosen the types of gates you are using you need to determine how you will define those gates. Here are 3 things to remember when defining your gates…

1. Don’t use an isotype control.

Historically, researchers used a tool called the isotype control to define positive from negative. The theory for using isotype controls was that an antibody of the same isotype as the antibody of interest would reveal the non-specific binding of the target antibody.

A useful control should be one that changes a single variable. In the case of an isotype control, some large assumptions are made that invalidate this reagent as a useful control.

  1. That the isotype control has the same affinity for off-targets as the experimental antibody.
  2. That there are no primary targets for the isotype control.
  3. That the fluorochrome-to-protein ratio is the same for the isotype control and the target antibody.

These are some pretty big assumptions. Take, for example, the mouse IgG2a, 𝜅 isotype control clone MOPC-173. This was first described in the 1970s, but no target was identified. Vendors selling this reagent generally state that it has been tested against major lineage subsets under various conditions. Does this testing resolve assumptions 1 and 2?

Looking at the F/P ratio: for some reagents (PE-labeled, for example), steric hindrance prevents more than one label per antibody. However, for smaller molecules, different antibodies can have different optimal F/P ratios. This figure, from ThermoFisher's website, shows the issue clearly. The graph plots the F/P ratio of two different molecules (FITC and AlexaFluor 488) against the resulting fluorescence. Notice that FITC tops out at a ratio of about 6 for this molecule, whereas AlexaFluor 488 keeps getting brighter. Thus, the brightness of your reagent is tied to the F/P ratio. If the F/P ratio of the target antibody is lower than that of the isotype control, the observed signal for the isotype could be higher for this reason, not due to more binding.

Figure 4: Impact of F/P ratio on fluorescence signal.

Reviewing the literature, Maecker and Trotter (2006) show the binding of three different isotype controls (PE-labeled) compared to unstained cells gated on 'small lymphocytes'. Notice the differences in background staining.

Figure 5: Isotype control staining from Figure 2 of Maecker and Trotter (2006). The red line is set at 10², based on the peak of the unstained control (added for comparison).

The authors conclude that "…it is a hit-or-miss prospect to find an isotype control that truly matches the background staining of a particular test antibody…"

In a paper by Andersen and colleagues (2016), this point was further driven home. These researchers were looking to optimize the blocking of the cells they study. In Figure 1 of this paper, the authors show staining with anti-Tie2, which is known to be expressed at low levels on the target cells. An isotype control was included in the analysis, and based on that isotype control staining, the cells would be considered Tie2 negative. The authors state that the cells were stained with antibodies of the "…same isotype, fluorescence conjugation, and manufacturer, at the same staining concentration (2 μg/mL)..."

Figure 6: Erroneous isotype control staining

The authors conclude their paper stating “…Due to the unpredictable nature of isotype controls, we recommend not to use such controls for determining background signal, or for gating in flow cytometry. Instead, nonspecific staining should be alleviated by use of a blocking reagent….”

The only valid use of an isotype control is to demonstrate that the samples were blocked sufficiently. It should not be used for setting positivity.

2. Use a fluorescence minus one control.

The fluorescence minus one control (FMO) is a control where cells are labeled with all the staining reagents except one. This empty channel can be used to determine how the sensitivity of the channel is impacted by the other fluorochromes in the experiment. An example of a PE FMO control is shown below.

Figure 7: PE FMO Control

The red line represents where the negative/positive line would be if the unstained cells were used to determine this. However, in the middle panel, which has not been stained with PE, there are cells above this line. These cells cannot be positive for PE, since there is none in the stain buffer. Thus, the correct boundary is the blue line. The black arrow shows the spread of the data caused by spillover from the dyes in other channels into the PE channel.

The FMO control is essential when the accurate determination of positive is critical. This can include rare events, emergent antigens, and dim markers. During the optimization phase of panel design, it is recommended to run all FMO controls to determine which controls are critical for identifying the target cells of interest. When the panel moves into validation, only those FMO controls that will be needed are used.

3. Use an unstimulated sample.

When performing a stimulation experiment, researchers can take advantage of the biology of the system to distinguish positive from negative using an unstimulated control. This is demonstrated in the figure below, from Figure 3 of the 2006 Maecker and Trotter paper.

Figure 8: Using an unstimulated control to set positivity.

The blue line represents where the isotype control positive/negative boundary would be set and the red line represents where the FMO control boundary would be set. Notice how the FMO would overestimate the responding cells and the isotype control would underestimate this value.

In this case, the unstimulated control takes into account both the spread of the data that the FMO reveals and the non-specific binding that the isotype control would, in theory, reveal. Having a biological control is an excellent addition to an experiment, and should be explored during the optimization phase of panel design.

One last recommendation to help set the gates: when gating on an unstimulated or FMO control, consider using a cutoff percentage. This is where the researcher sets the gate such that no more than some percentage of events in the control fall inside the gate. A good cutoff percentage is 0.1%, which is derived from the critical values of a normal distribution: events in this region are more than 3 standard deviations away from the mean of the distribution.
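As a minimal sketch of this cutoff approach, the gate can be computed as the 99.9th percentile of the control distribution and then applied to the stained or stimulated sample. The PE-A channel name in the commented usage is an assumption.

```python
import numpy as np


def control_based_threshold(control_values, cutoff_percent=0.1):
    """Gate position such that no more than `cutoff_percent` of control
    events (FMO or unstimulated) fall above it -- the 99.9th percentile
    when the cutoff is 0.1%."""
    return np.percentile(np.asarray(control_values, dtype=float),
                         100.0 - cutoff_percent)


def percent_positive(sample_values, threshold):
    """Percent of stained or stimulated events above the control-derived gate."""
    return 100.0 * np.mean(np.asarray(sample_values, dtype=float) > threshold)


# Hypothetical usage (the PE-A column name is an assumption):
# gate = control_based_threshold(fmo_events["PE-A"])
# print(percent_positive(stained_events["PE-A"], gate))
```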

Gating is a critical process in data analysis. Identifying the correct populations to extract information for secondary analysis is essential for robust analysis and conclusions. Use the FMO and biological controls while avoiding the temptation to use an isotype control for setting positivity. Since gating is a data reduction technique, it is important that these gates are set correctly and clearly explained in any publication where the data are used. Consider the MIFlowCyt standard as a way to communicate this information. If you do those things, you can set your gates with confidence and have high-quality downstream results.

To learn more about how to Avoid Data Loss By Following These Steps To Set Your Flow Cytometry Gates Correctly, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.

Flow Cytometry Mastery Class wait list | Expert Cytometry | Flow Cytometry Training

The Right Way To Read A Flow Cytometry Scientific Paper


Written By: Tim Bushnell, Ph.D.

Scientific reproducibility and the public's confidence in scientific results are critically important. One has to look no further than the website Retraction Watch to learn of the published papers being retracted for a host of reasons, ranging from plagiarism to scientific misconduct. The cost of these retractions is more than losing a paper; it can include financial penalties and even prison. In March 2019, Duke University settled a lawsuit brought by the US government for over 100 million dollars arising from fraudulent data used by a researcher in grants and papers from that institution over the years.

These types of stories erode public confidence in scientific data and lead to issues like the discredited link between autism and vaccination. Couple this with the issues of reproducibility that have been discussed in several articles, like this one.

But right now science is facing a reproducibility crisis and the public’s trust in scientific results is faltering. As scientists, it is our responsibility to do everything we can to address these issues.

Recently, a colleague of mine forwarded me a tweet that had been sent by a journal sharing a paper this publication recently published. The tweet implied that this type of research was cutting-edge and so was the journal.

He asked me, "What do you think of the flow data?" I was aghast that this paper had been published, given the quality of the data and the lack of information, controls, and consistency.

It was disappointing to see that a paper like this made it through peer-review. Of course, it doesn’t help when stories like this one highlight another issue in science. It can be easy to set up a website and start a journal in an effort to make easy money. These predatory journals are characterized by charging fees to the scientists seeking to get published. There is a list of these journals here.

When you are reading a paper, look at three areas that deserve a critical read to assess the quality of the data and the authors' commitment to reproducibility.

1. Methods section.

The methods section can reveal a lot about the paper. Here is where the details of how the experiments were performed are found, along with the information needed to assess how much confidence one should have in the data. When reading papers, there are a couple of critical areas to check.

  1. Did the authors describe the instrument? Understanding the instrument characteristics is important to help assess the quality of the fluorescent data. Sub-optimal excitation lines or poorly chosen emission filters can call conclusions into question, since the instrument may have poor sensitivity for the target fluorochrome.
  2. Did the authors mention the antigen and clone name? Mentioning CD3 from Company X in the methods section doesn't provide enough information to reproduce the experiment. Take the common target anti-human CD3: there are at least 7 different commercially available clones. Using two different sources, Benchsci.com and the OMIPs, there is no single clone that is used more than the others, so if the paper just says CD3, which one are the authors referring to?
    Figure 1: The distribution of anti-CD3 clones used in publications and the OMIPs

  3. What controls were used? Gating data to identify the populations of interest is a critical step in the data analysis process. Proper gating requires the use of the best controls. These include FMO controls, reference controls, unstimulated controls, and more. These should be spelled out, along with how they were used. The figure below shows a comparison of the unstimulated, FMO, and isotype controls for setting gates on a stimulated sample.

Figure 2: Comparison of three different controls to determine the appropriate gate. The data is from Maecker and Trotter (2006). Red and blue lines are added for illustration.

The FMO control is designed to identify the spreading of signal in the channel of interest and is illustrated by the red dashed line above. The isotype control is supposed to address the background staining and is illustrated by the blue dashed line. The isotype control is not recommended.

On the fully stained sample, the FMO control would overestimate the number of positive cells. However, the unstimulated sample (quadrants) addresses both the spectral issues and any non-specific binding (NSB) issues for the target antigen. This is the type of detail that needs to be in the methods section to improve the reproducibility of the data by other researchers.

Other important details in the methods section include whether a viability dye was used, what software was used for analysis, and more. The methods section should provide sufficient detail that the data can be reproduced.

2. Results section.

Moving to the figures and results section, the methods section should have prepared the reader for what information will be presented in the results. One of the most common mistakes made in papers is how the figures are labeled.

Don't rely on the labels that come with the FCS header. Each machine has its own labeling system, which may not be correct when it comes to publication. Take, for example, an instrument that has a channel labeled PerCP, and the published figure has PerCP on the axis of the bivariate plot, but there is no PerCP mentioned in the methods section. What does this mean? The axes should be labeled with the antigen name and, ideally, the excitation line and emission filters, to make sure the reader knows the correct details.
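One practical way to check this is to read the parameter keywords directly from the FCS file: the $PnN keyword holds the detector name and $PnS holds the stain label entered at acquisition. The sketch below assumes the third-party fcsparser package is available; the file name is a placeholder.

```python
import fcsparser  # third-party FCS reader, assumed to be installed


def list_channel_labels(path):
    """Print the detector name ($PnN) and the stain label entered at
    acquisition ($PnS) for each parameter in an FCS file, so published
    axis labels can be checked against what was actually recorded."""
    meta, _ = fcsparser.parse(path)
    n_params = int(meta["$PAR"])
    for i in range(1, n_params + 1):
        name = meta.get(f"$P{i}N", "")
        stain = meta.get(f"$P{i}S", "<no stain label recorded>")
        print(f"P{i}: {name:<15} {stain}")


# list_channel_labels("sample.fcs")   # file name is a placeholder
```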

A second point: look at the axes of bivariate plots. The axes can indicate whether the data have been transformed in some manner. It is a big issue when papers do not reveal the transformation used, or do not indicate whether the same transformation was applied to all of the data.
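As an illustration of what a transformation does to the data, here is a sketch of the inverse hyperbolic sine (arcsinh) transform, one commonly used display transform for compensated data that may contain negative values. The cofactor of 150 and the column name are illustrative assumptions; the paper being read should state which transform and parameters were actually used.

```python
import numpy as np


def arcsinh_transform(values, cofactor=150.0):
    """Inverse hyperbolic sine transform, a commonly used display
    transform for compensated data that can contain negative values.

    The cofactor controls the width of the quasi-linear region around
    zero; 150 is only an illustrative value and should be chosen per
    instrument and fluorochrome."""
    return np.arcsinh(np.asarray(values, dtype=float) / cofactor)


# Hypothetical usage (column name is an assumption):
# events["PE-A_arcsinh"] = arcsinh_transform(events["PE-A"])
```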

Another area to review in the results section is the gating strategy. The use of pulse geometry is a powerful way to help remove doublets. However, if the area scaling factor (ASF) is set wrong, cells of interest can be lost, especially with larger cells, where the ASF needs to be adjusted. The consequences of a poorly set ASF are discussed in this paper.

Statistical analysis is another area to examine closely. The hypothesis should be clearly stated, along with the significance threshold and the type of test that was used. The number of replicates should also be listed. This information will help the reader assess the strength of any results.

3. MIFlowCyt standard and the Flow Repository.

As cytometrists, we have a tool that can be used to help improve the communication of experimental information. This is the Minimum Information about a Flow Cytometry Experiment, or MIFlowCyt, standard. The development of this standard was under the auspices of the ISAC Data Standards Task Force and represents a trend in the biomedical sciences toward comprehensive checklists that ensure authors include critical information.

Using the MIFlowCyt standard requires the researcher to include descriptions of the sample, the instrument, the experimental overview, and the data analysis.

At the present time, MIFlowCyt is only used by a handful of journals from the publishers Wiley-Blackwell and Nature Publishing Group. For articles submitted to Cytometry A, if the paper is MIFlowCyt compliant, it gains a special distinction and a badge to indicate that the paper is compliant.

When publishing elsewhere, investigators can encourage those journals to adopt this standard.

Another way for researchers to improve the quality of publications and confidence in data is to share the information in a way that others can access it. The Flow Repository was started by ISAC with the support of the Wallace H. Coulter Foundation. Investigators can upload their data to this database. The data can remain embargoed until publication, but a special access link can be provided to the reviewers so they have access to the full data as they make a decision on the paper.

Once a paper is published, the data can be released and anybody can access it. This allows researchers to review the data a paper is based on. Further, it can be used to train new researchers in analytical techniques using published data.

Checking whether a paper is MIFlowCyt compliant and whether the data are archived in the Flow Repository is an excellent way to gain confidence in the conclusions of the publication, or to identify whether the analysis was skewed one way or the other.

When you are reviewing papers, Glenn Begley's rules, which were discussed by Bruce Booth, are an excellent guide. These 6 rules are a powerful tool to aid the researcher and help demonstrate the reproducibility of the data.

In the end, no one wants to be the headline on Retraction Watch. No one wants to see their institution fined 100 million dollars because of data fraud. Read every paper with a critical eye, and doubly so for your own manuscripts. Even when submitting a paper to a journal that doesn't use the MIFlowCyt standard, it is good practice to annotate each paper in this manner. In so doing, the authors will go through and make sure none of the critical information is missing. With estimates of the money wasted on irreproducible results in the 70-90% range, it is essential that researchers do better.

To learn more about The Right Way To Read A Flow Cytometry Scientific Paper, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.

Flow Cytometry Mastery Class wait list | Expert Cytometry | Flow Cytometry Training
