
The Difference Between Linear And Log Displays In Flow Cytometry


Written by Tim Bushnell, Ph.D

Data display is fundamental to flow cytometry and strongly influences the way that we interpret the underlying information.

One of the most important aspects of graphing flow cytometry data is the scale type. Flow cytometry data scales come in two flavors, linear and logarithmic (log), which dictate how data is organized on plots. Understanding these two scales is critical for data interpretation.

Let’s start at the beginning, where signal is generated, and trace its path all the way from the detector to the display.

Behind every flow cytometry data point is what we call a pulse. The pulse is the signal output of a detector generated as a particle transits the laser beam over time. As the cell passes through the laser beam, the intensity of the signal from the detector increases, reaches a maximum, and finally returns to baseline as the cell departs the laser beam. The entirety of this signal event is the pulse (see Figure 1).

Figure 1: The voltage pulse begins when a cell enters the laser, hits its maximum when the cell is maximally illuminated, then returns to baseline as the cell exits the beam.

This is all good, but an electrical pulse is not useful to us in and of itself. We need to extract some kind of information from it in order to measure the biological characteristics we are seeking. This is where the cytometer’s electronics (which contribute significantly to a particular cytometer model’s performance heft and price tag) come into play.

Modern instruments employ digital electronics. This means that the signal intensity over the course of a pulse is digitized by an analog-to-digital converter (ADC) before information is extracted from it.

This was not the case in the past, when most systems used analog electronics. In analog systems, the information about a pulse is calculated within the circuitry itself, and is digitized for the sole purpose of sending the data to the computer for display.

Regardless of the instrument, the type of data provided about the pulse is the same: area, height, and width (see Figure 2). These three pulse parameters are what are ultimately displayed on plots.

Figure 2: Three characteristics of the voltage pulse: area, height, and width.

Area and height are used as measurements of signal intensity, while width is often used to distinguish a single cell from two cells that passed through the laser so close together that the cytometer classified them as one event (a doublet event).

Typically, on flow cytometry plots, you will see the axis or scale labeled with an A, H, or W denoting the pulse parameter being displayed (e.g. “FITC-A,” “FITC-H,” or “FITC-W”).

It is important to note that all of the pulse processing is performed in the cytometer electronics system, not in the computer.

The reason for this is that the required speed for processing can exceed what is possible with the computer and its ethernet connection. Given this, the cytometer passes all of the pulse measurements, already neatly processed and packaged, to the computer and cytometer software that graphs the data.

This is when plot scaling becomes important.

The range of signal levels that the cytometer transmits to the computer is extremely large, and is a function of the cytometer’s ADC. The number of bits of the ADC determines how many values comprise this range of signals.

For example, a 24-bit ADC can divide the range of signals into 16,777,216 (2^24) discrete values. (Note that each scatter or fluorescence channel gets its own ADC, so the number of ADCs equals the total number of parameters on the instrument.) Therefore, the dimmest FITC signal on this example instrument can be assigned a value of 1 while the brightest FITC signal can be assigned a value of 16,777,216.

Even though each signal is assigned one of 2^24 possible values, this kind of resolution is much too fine to be useful on the scales of plots.

If a histogram’s scale reflected this many values, events would be spread out among so many channels that we would need to collect millions of events to see the peaks and populations we are used to.

Furthermore, computer monitors don’t have the resolution required to draw dots on this scale. Even if they did, the dots would be so small we wouldn’t be able to see them on the screen.

The universally employed solution is to scale down the resolution on plots to a more practical, but still useful, degree.

Instead of dividing the scale into millions of units, we divide it into 256 (or, in some cases, 512) units called channels.

For a 256-channel system, we allocate all 16,777,216 digital values equally among the channels, so that each one contains 65,536 discrete values (16,777,216 divided by 256). Channel 1 contains the dimmest 65,536 values, while channel 256 contains the brightest 65,536 values.

This kind of scale is linear because equivalent steps in spatial distance on the scale represent linear changes in the data. As illustrated in Figure 3, moving a distance of x reflects a change of 64 channels, regardless of whether the starting point is channel 0, channel 64, or channel 192.

As such, the key feature of a linear scale is that the channels are distributed equally along the scale: the distance between channel 1 and channel 2 is the same as the distance between channel 100 and channel 101.

Figure 3: On a linear scale, channels are spaced equally.
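To make the arithmetic concrete, here is a minimal sketch in Python, purely for illustration; the function name and the mapping are invented for this example and are not taken from any vendor's software. It bins a 24-bit ADC value into 256 equal-width linear display channels using the numbers from the example above.

```python
# Illustrative only: binning 24-bit ADC values into 256 equal-width (linear) channels.
ADC_BITS = 24
N_CHANNELS = 256
ADC_VALUES = 2 ** ADC_BITS                       # 16,777,216 discrete digital values
VALUES_PER_CHANNEL = ADC_VALUES // N_CHANNELS    # 65,536 values per channel

def linear_channel(adc_value: int) -> int:
    """Map a raw ADC value (0 to 16,777,215) onto a display channel (1 to 256)."""
    return adc_value // VALUES_PER_CHANNEL + 1

print(linear_channel(0))            # 1   (dimmest signals)
print(linear_channel(8_388_608))    # 129 (mid-scale)
print(linear_channel(16_777_215))   # 256 (brightest signals)
```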

Linear scale is certainly nice, but what happens if two populations, with very different levels of intensity, must be plotted together? This is a common situation in flow cytometry, in which nonfluorescent cells are visualized on the same plot as brightly fluorescent cells.

In this case, a plot with linear scaling becomes much less useful, as it will be very difficult to see both fluorescent and nonfluorescent cells at the same time, no matter what PMT voltage we use. Either all the nonfluorescent cells will be crammed into the first few channels, or all the fluorescent cells will be crammed into the top few channels.

This is where a logarithmic scale comes into play.

A log scale is one in which steps in spatial distance on the scale represent changes in powers of 10 (usually) in the data.

In other words, moving up a log scale by one quarter of the scale allows us to move from channel 1 to channel 10 (see Figure 4). Moving another quarter distance up the scale brings us not to channel 20 but to channel 100, a power of 10.

Figure 4: On a log scale, channels are unequally spaced so that one can visualize both high and low signals on the same plot.

Log scales are really good at facilitating visualization of data with very different medians, and are organized into decades. A four-decade log scale is marked 10^1, 10^2, 10^3, and 10^4, so it contains 10,000 channels in total.

Importantly, even though each channel itself contains the same number of digital values, data channels are not distributed equivalently across the scale.

The first decade, from 10^0 to 10^1, contains 10 channels (channel 1 to channel 10). The second decade, even though it occupies the same amount of space on the scale, contains not 10 but 90 channels (11 to 100). And, the fourth decade, from 10^3 to 10^4 — occupying the same space as each other decade does — contains a whopping 9,000 channels (1001 to 10,000).

On the log scale, data is compressed to a much greater degree at the high end than it is at the low end, and it is this very property that makes it so good for visually representing data with very different medians (see Figure 5).

Figure 5: Effects of Linear vs Log scaling on resolution of 8-peak beads. The Spherotech 8-peak bead-set was run on a DIVA instrument with either Log scaling (left) or Linear scaling (right). The 8th peak was placed, on scale, at the far right of the plot. As can be seen, without log scaling of the data, the bottom 6 peaks cannot be resolved.
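As a companion to the linear sketch above, the snippet below (again illustrative Python with made-up function names) shows the two defining properties of a four-decade log display: equal fractions of the axis correspond to powers of 10, and the number of channels packed into each decade grows tenfold.

```python
import numpy as np

DECADES = 4

def log_axis_position(channel: int) -> float:
    """Fractional position (0 = far left, 1 = far right) of a channel on a 4-decade log axis."""
    return np.log10(channel) / DECADES

for ch in (1, 10, 100, 1_000, 10_000):
    print(f"channel {ch:>6} sits at {log_axis_position(ch):.2f} of the axis")
# Each power of 10 advances the position by exactly one quarter of the axis.

# Channels per decade (counting as in the text: 1-10, 11-100, 101-1000, 1001-10000):
edges = [0, 10, 100, 1_000, 10_000]
print([hi - lo for lo, hi in zip(edges, edges[1:])])   # [10, 90, 900, 9000]
```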

It is very important to keep in mind that in the digital cytometry world, these scales are solely visualization methods and, like a compensation matrix, have no effect on the underlying data. The scales are applied by the cytometry software, not the cytometry hardware.

Incidentally, this was not the case in older analog systems which applied the logarithmic transformation in the cytometer electronics using logarithmic amplifiers, so the data streamed to the computer was already “log transformed” before it got to the software.

At this point, you are probably wondering about the practicalities of these scales: when should you use linear scale and when should you use log scale?

Typically, linear scale is used for light scatter measurements (where particles differ subtly in signal intensity) and log scale is used for fluorescence (where particles differ quite starkly in signal).

However, it is not always this simple.

For most flow cytometry on mammalian cells, the range of both forward and side scatter signals generated by all particles in a single sample is not wide enough to warrant a log scale for proper visualization.

Particle size may range from a few microns to 20+ microns in a typical sample, so the entire gamut of particles would be happily on-scale using a linear scale. In fact, log scale would be counterproductive in this situation, compressing the range and making it difficult to differentiate different blood cell populations from each other, for example.

However, side scatter on a log scale can be extremely informative, especially when measuring “messy” samples with many different kinds of cell types, like those generated from dissociated solid tissues.

Additionally, make sure to use both forward and side scatter on log scale when measuring microparticles or microbiological samples like bacteria. These types of particles generate dim scatter signals that are close to the cytometer’s noise, so it’s often necessary to visualize signal on a log scale in order to separate the signal from scatter noise.

Fluorescence measurements typically involve populations that differ significantly in intensity, and thus require a log scale for visualization. This is the case when measuring signal from immunofluorescence, fluorescent proteins, viability dyes, or most functional dyes.

However, there is a major exception: cell cycle analysis. Cell cycle analysis by flow cytometry is usually accomplished by measuring DNA content via fluorescence. Cells in G2/M contain twice the amount of DNA found in G0/G1 cells, so we need to see relatively small differences in signal intensity in order to assess cell cycle state.

Therefore, cell cycle analysis must be visualized on linear scale.

We hope this explanation sheds some light on scaling. Knowing how to properly display your data is a critical part of scientific communication. Remember to use linear scaling for most scatter parameters, or when you need to visualize small changes, and log scaling for most fluorescence parameters, or when you need to visualize a wide range of values. As always in flow cytometry, there are certainly exceptions, but armed with this knowledge, you should be able to make educated judgements about which scale types to use in various assays and to better interpret your data. Happy flowing!

To learn more about The Difference Between Linear And Log Displays In Flow Cytometry, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.



3 Advantages Of Using The ZE5 Cell Analyzer


Written by Tim Bushnell, Ph.D

Since the first laser was mounted to create the first flow cytometer, there has been a push for more – more lasers, more detectors, and more colors.

Why?

So researchers could ask more complicated questions to squeeze every iota of data out of rare events and precious samples, and so clinicians could expand the diagnostics capabilities of the technology. In addition, this trend has occurred so biotech companies could expand high-content screening for drug discovery.

Instrument manufacturers have brought to the table a plethora of different instruments, with capabilities to suit the needs of the researcher, at price points to make the accountants happy, and with improvements in hardware, software, and automation to make the operator's job easier. Often, these are extensions of a vendor's existing equipment.

Due to these improvements, the average researcher today has capabilities that were previously possible in only a very few specialized laboratories.

The Democratization of Flow Cytometry

What happens when an instrument manufacturer takes a step back, evaluates needs of different constituencies, and embarks on a journey to build a machine from the ground up, taking the best in technology and workflow to bring something novel to market that enables more users to gain access to the highest level capabilities?

A democratization of flow cytometry, and perhaps science in general.

For a core manager, some of the characteristics that are important in evaluating a potential new instrument include sensitivity, flexibility, and capacity. When thinking about training users, an easy-to-use interface is a must. If the software interface is too complicated, the system will sit in the corner and collect dust. Automation of common tasks, from startup to instrument cleaning to quality control, is a bonus.

Researchers need to have a large number of lasers and detectors to ensure current panels can be run and new, expanded panels can be developed. Fast, automated acquisition is important to allow more to get done during the day. And easy-to-use software that lowers the barrier to access is a must.

This is a tall order because, in general, making one decision to improve a cell analyzer can limit the analyzer in other ways.

For example, fast acquisition with multiple lasers requires fast electronics and responsive optics, which can drive the cost of the instrument higher. Likewise, automation of common tasks can require complex software and monitoring devices, which may both increase price and decrease ease of access.

It may seem like an impossible task, but Bio-Rad and Propel Laboratories collaborated to bring the ZE5™ Cell Analyzer to market and, with thoughtful design, answered these challenges, resulting in a high-end, easy-to-use, automated flow cytometer.

3 Advantages Of Using The ZE5 Flow Cytometer

Starting with the standard flow cytometry workflow, the ZE5 is the cytometrist’s cytometer.

On-board and automated quality control systems ensure the instrument starts up at the beginning of the day, and shuts down at the end of the day. More than that, the system has a novel development called the ‘ZE5-EYE’.

This system monitors all the laser spots, making sure the correct filter is in place and, more importantly, continues to monitor the system performance during the day, notifying users if there is a problem BEFORE they run their samples.

Overall, the ZE5 offers 3 unique advantages to cytometrists and end-users alike…

1. User Friendly Software With A Panel Design Tool

The ZE5’s advanced software system is called Everest™. But, don’t be fooled by its advanced capabilities.

Everest is easy to use and intuitive, which means even the most novice of users will not be intimidated as they learn to perform their flow cytometry experiments.

In addition, the software aids in the panel design process, and allows the researcher to view, on the instrument, the best fluorochrome/laser/filter combinations, which is essential for the development of a high-end polychromatic panel.

2. High-End Construction And Advanced Capabilities

With 5 lasers, the ZE5 has the range to take advantage of all commercially available dyes.

Another positive feature is the lasers are liquid-cooled. This helps maintain laser stability, especially on hot (and humid) days in the lab.

The fluidics have been re-thought too. For those familiar with the swappable fluidics of the S3e™ Cell Sorter, also from Bio-Rad, it will come as no surprise that the ZE5 has a similar fluidics system. So, if a researcher has a long run, they don't have to depressurize the system to refill the sheath tanks.

Instead, with the ZE5, you can hot-swap the fluidics with no loss of sample integrity during long acquisitions.

3. Lightning-Fast Acquisition Speed Without Sacrificing Quality

Acquisition speed has always been a limitation on many systems. Slow systems increase the time needed to collect sufficient events for rare event (and really rare event) analysis, making such experiments tedious and time-consuming.

This is not the case with the ZE5, as seen in the figure below…

Figure 1: This graph shows the acquisition speed capabilities of the ZE5. Bangs Labs, Dragon Green Beads were used in a serial dilution to determine when cell count falls off the theoretical limit. The ZE5 outperforms other systems at higher acquisition speeds as it continues to acquire data into the 100,000 event per second range, whereas the competitive systems fall off around 20,000 events per second. Data and graph courtesy of Karen Helm, MT(ASCP) from the University of Colorado Cancer Center.

The ZE5 tops out at up to 100,000 events per second, making those 10-minute run times a thing of the past in many cases. Even more impressive is the fact that the data spread and the loss of events due to hard aborts and coincident events remain tolerable at that high rate.

As mentioned, today’s researchers require a large number of lasers and detectors to ensure current panels can be run and expanded panels can be developed. The problem is that improving a cell analyzer in one direction can limit the analyzer in another direction. The ZE5 Cell Analyzer solves this problem with their new flow cytometer that includes user-friendly Everest software, high-end construction and capabilities, and blazing fast, high-quality acquisition speed.

To learn more about the ZE5 Cell Analyzer from Bio-Rad, and how to analyze your cells properly at faster acquisition speeds, click here.


2 Key SPADE Parameters To Adjust For Best Flow Cytometry Results


Written by Tim Bushnell, Ph.D

Mass cytometry panels routinely include 30 or more markers, but traditional analysis methods like bivariate gating can’t adequately parse the resulting high-dimensional data.

Spanning-tree progression analysis of density-normalized events (SPADE) is one of the most commonly used computational tools for visualizing and interpreting data sets from mass cytometry and multidimensional fluorescence flow cytometry experiments.

There are two key parameters in SPADE that you can adjust in order to get the best results possible: downsampling and target number of nodes, or k. Knowing how to properly set these values will enable you to enhance the quality of your analysis.

Downsampling

Imagine your data as a cloud of points in high dimensional space, where each dimension is one of the measured markers.

Cells that are similar to each other are close to one another in this cloud, just as similar cells fall together on a biaxial gating plot. This means that the cloud contains dense regions where there are groups of similar cells, and more sparsely populated regions where there are few similar cells.

The cells falling around the edges of dense regions will likely be grouped into the larger clusters during analysis, even if some of the sparse regions contain cell subsets that happen to be small but phenotypically distinct.

Downsampling in the SPADE algorithm reduces the density variation across the cloud in order to give more equal weight to small, less dense groups of cells in the clustering process so they won’t get absorbed into the larger, denser regions.

After downsampling, SPADE clusters the data and then upsamples in order to map the cells that were removed during downsampling back into the clusters to which they are most similar.

You can adjust the extent of downsampling by changing the percentage or absolute number. The percentage indicates the percentage of cells you want to keep during the downsampling process.

100% downsampling means that 100% of the cells will be kept, and therefore SPADE will not downsample. 5% downsampling means that only 5% of the cells will be kept for clustering. Lowering the downsampling percentage in this way prevents small or rare populations of cells from being lost in the clustering step.

If you set an absolute number, rather than a percentage, SPADE downsamples until this number of cells remains.

If you’re working with a limited number of relatively large populations, like normal blood cells, you can probably safely leave the downsampling percentage set to the default. However, if you are seeking novel populations of cells, or very small populations like stem cells, you should consider setting the downsampling to a lower percentage in order to prevent losing those populations during clustering (Figure 1).

Figure 1. Downsampling removes density variation to determine which regions of the point cloud constitute discrete clusters. A) Initial data cloud in n dimensions shown before and after appropriate downsampling, assuming five real cell populations in the data. B) Clusters determined after too much downsampling. Low density regions are inappropriately considered to be discrete clusters. C) Clusters determined after too little downsampling. Only high density regions are considered to be clusters.
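For readers who want a feel for what density-dependent downsampling does, here is a deliberately simplified Python sketch. It is not the SPADE implementation (SPADE uses its own density estimate and target-density parameters); it only illustrates the idea of preferentially keeping cells from sparse regions so that rare populations survive to the clustering step. The radius, seed, and data below are all made up.

```python
import numpy as np
from scipy.spatial import cKDTree

def density_dependent_downsample(data, target_fraction=0.05, radius=1.0, seed=0):
    """Keep roughly `target_fraction` of cells, preferring cells in sparse regions.

    A toy stand-in for SPADE's density-normalized downsampling, for intuition only.
    """
    rng = np.random.default_rng(seed)
    tree = cKDTree(data)
    # Local density ~ number of neighbours within `radius` of each cell (includes the cell itself).
    local_density = np.array([len(nbrs) for nbrs in tree.query_ball_point(data, r=radius)])
    # Keep probability inversely proportional to local density, rescaled so that
    # about `target_fraction` of all cells survive on average.
    keep_prob = 1.0 / local_density
    keep_prob *= target_fraction * len(data) / keep_prob.sum()
    keep = rng.random(len(data)) < np.clip(keep_prob, 0.0, 1.0)
    return data[keep]

# 10,000 synthetic "cells" in 10 marker dimensions, keeping ~5% for clustering.
cells = np.random.default_rng(1).normal(size=(10_000, 10))
print(density_dependent_downsample(cells, target_fraction=0.05).shape)
```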

An important consideration is that when you set the downsampling percentage very low, you risk focusing on noise in the data. Sparse regions in the high dimensional cloud might be treated as discrete clusters, when in reality, they represent nothing more than noise.

On the other hand, if you set the downsampling percentage too high, or if you don’t downsample at all, you risk overlooking smaller, “real” populations of cells.

Target Number of Nodes (k)

The second parameter that you can adjust in a SPADE analysis is the target number of nodes, or k. This value indicates the number of populations into which you want SPADE to divide the cells.

Keep in mind that this number is a target, not an exact value, so you may notice empty clusters in the final output if SPADE couldn’t find exactly k number of clusters in the data.

A good rule of thumb is to always ask SPADE for more nodes, or clusters, than you expect to find.

Overclustering in this way allows you to identify potentially unexpected subpopulations that are defined by subtle, high-dimensional patterns of marker co-expression (for example, small subpopulations of T cells in normal blood that are defined by subtle differences in their co-expression of several activation markers).

Additionally, overclustering helps you delineate between major populations because when the SPADE tree is populated by more nodes, it is easier to visualize and determine more precisely where one major population, or group of nodes, ends and another begins.

When choosing k, you should consider how many populations you expect to find, the relative size of those populations, and the total number of cells in the data set.

If you ask SPADE for 500 clusters but only have 1,000 cells and 5 major populations, you’ll probably get back lots of empty clusters as well as clusters with only a few cells each.
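A quick back-of-the-envelope check makes the point. The sketch below is plain Python using the numbers from the example above; the second call uses invented numbers purely for contrast.

```python
def average_cells_per_node(n_cells: int, k: int) -> float:
    """Rough average node size if events were spread evenly across k clusters."""
    return n_cells / k

print(average_cells_per_node(1_000, 500))     # 2.0  -> expect many empty or tiny nodes
print(average_cells_per_node(100_000, 200))   # 500.0 -> a more workable overclustering target
```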

It’s crucial to consider the biological implications of what you put into SPADE, and what you get back. For example, is a population of T cells that only has 3 out of the 1,000 cells “real” or significant?

Knowing your data and your biological system can help you decide appropriate cut-offs for k values, as well as what population sizes are likely to be biologically valid, versus just noise.

Another consideration is that small subsets resulting from high cluster numbers can also be more unstable, meaning that the cells’ phenotypic similarity is so subtle that another round of clustering might group them differently, and thus you may find that these populations won’t hold up to further computational or experimental scrutiny.

Conversely, setting k too low can cause you to miss smaller populations, as they’re likely to be merged into the larger, denser clusters of cells (Figure 2).

Figure 2. Target number of nodes (k) affects the number of clusters returned by SPADE. A) Initial data point cloud before and after clustering using appropriate target number of nodes, assuming five real populations of cells in the data. Data is properly overclustered, allowing analyst to manually delineate between major populations. B) Clustering result if target number of nodes is too high. Major populations appear to contain multiple clusters, and lower density regions are designated as discrete clusters, containing very few cells. C) Clustering result if target number of nodes is too low. All cells are grouped into a few large clusters, obscuring smaller populations of cells.

Fine-tuning your SPADE analysis requires a k value and downsampling percentage that will identify small, rare cell populations without blowing out noise in the data.

Going forward with your analysis, it’s always crucial to experimentally validate novel populations that you have discovered, using SPADE or other computational methods.

For further reading, see: Qiu et al. Extracting a cellular hierarchy from high-dimensional cytometry data with SPADE. Nature Biotechnology, 2011.

To learn more about 2 Key SPADE Parameters To Adjust For Best Flow Cytometry Results, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.

Flow Cytometry Mastery Class wait list | Expert Cytometry | Flow Cytometry Training

Planning For Surface Staining Of Cells In Flow Cytometry


Written by Tim Bushnell, Ph.D

One of the most common assays in flow cytometry is the surface labeling of cells with antibodies. Often termed "immunophenotyping", it allows the researcher to identify, count, and isolate cells of interest in a mix of input cells. Every lab has its own favorite protocol, handed down from some hallowed, chemical-stained notebook, and followed as exactly as a soufflé recipe.

The real questions are, which of those steps are critical, and (with changes in instruments and theory) what other factors should be considered when staining cells? This article will focus on staining immune cells, but the principles apply in general, and specific issues for a specific sample type can be optimized in a similar way.

Cell Preparation

A protocol usually starts with a list of equipment that is needed. After that, the next important component is obtaining and preparing a sample. A good, single-cell suspension is essential for quality flow cytometry.

The source of your primary tissue will guide you down a path for processing. For liquid samples like blood, bone marrow, and spleen, it is pretty easy to make a single-cell suspension.

The trick for liquid suspensions comes in determining whether the red blood cells should be lysed, or otherwise removed. There are two schools of thought on this, the first being to remove the cells using a lysis process, or some centrifugation protocol. This has the advantage of removing the large excess of red blood cells, but also carries the potential risk of losing your target cells. This can be especially true if RBC lysis is performed.

The second school of thought is to use an antibody to identify the RBCs (Ter119 for murine cells, for example) and use that to generate a negative gate in your acquisition software.

The next consideration is which buffer recipe to use for staining. Phosphate-buffered saline is important, but to keep the cells happy, protein needs to be included.

The use of 0.5-1% bovine serum albumin (BSA) (fraction V) is encouraged over bovine serum (fetal or otherwise). The advantage of BSA is that you know what is in your buffer, whereas FBS may contain compounds that adversely affect your cells and must undergo more extensive lot testing.

In the case of tissue or solid tumor samples, the first place to look is the Worthington Tissue Dissociation guide. This web page is a go-to for those working with solid tissues. It is full of material on different organ types, the best ways to make single-cell suspensions, references, and more.

If the cells re-clump after preparation, one may try adding EDTA or small amounts of DNase (10 U per mL). These help eliminate clumping due to cell surface adhesion molecules and DNA released by dying cells, respectively. This is especially true of tissue/solid tumor samples.

In the last few years, the gentleMACS from Miltenyi Biotec has become popular with researchers seeking to make single-cell suspensions from tissue. BD Biosciences offers a similar product called the Medimachine.

Both machines operate on the same principles, in that the sample is introduced into a small tube that is inserted into a machine. These special tubes are like gentle mini-blenders, and mince the tissue very finely. The addition of appropriate enzymes helps to further reduce the tissue fragments to single cells.

Of course, you must still filter your samples to remove residual chunks from the mixture. There are many different filters on the market at various price points, so it is worth shopping around to find what will work best for you.

The crafty and thrifty researcher might even go to a local fabric store and obtain different samples of fabric to test for the ability to be used as an inexpensive filter. Small Parts, which is now part of Amazon, has sheets of 50 micron mesh for sale that, with appropriate application of scissors or rotary cutter, can be cut into filters for use in the lab. Many other sizes are also available for relatively reasonable prices.

There is no reason to not filter your samples and save everyone from the headaches of clogs.

Minimizing Off-Target Binding

The loss of sensitivity of a flow cytometry measurement can be attributed to several factors. While there is no perfect control to measure this loss of sensitivity, we can design protocols to reduce the impact.

Two such changes include using the proper concentration of antibody for staining, and proper blocking of the cells to minimize off-target binding. In this context, off-target refers to anything the antibody binds that is not the primary target of its Fab fragment, which includes both low-affinity binding and Fc receptor-mediated binding.

In the presence of excess antibody, antibody will bind low-affinity targets. This causes an increase in background fluorescence, resulting in reduced sensitivity for detection.

One way to mitigate this effect is to titrate your reagents. This process ensures that you identify the best concentration for staining (Figure 1). It is recommended that titrations be carried out in the presence of a viability dye, to reduce problems associated with non-specific uptake and binding by these membrane-compromised dead or dying cells.

Titration should be performed under the specific conditions of the assay. If the cells will be fixed, make sure they are fixed for your titration.

Figure 1: Titration of an antibody. A concatenated file of the data is shown on the left, and the signal intensity (the Staining Index) is calculated and plotted against the antibody concentration — the data is shown on the right.
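If you prefer to put numbers behind the "by eye" judgement, a titration series can be scored with a separation metric such as the staining index (mean of the stained cells minus the mean of the unstained cells, divided by twice the spread of the unstained cells). The sketch below is illustrative Python only: the concentrations and fluorescence values are invented, and this metric is one common formulation rather than the only option.

```python
import numpy as np

def staining_index(pos, neg):
    """Separation of stained (pos) from unstained (neg) events, penalised by the spread of the negatives."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    return (pos.mean() - neg.mean()) / (2 * neg.std())

# Hypothetical titration: per-cell fluorescence for stained and unstained cells at each dilution.
titration = {                    # ug/mL : (stained values, unstained values) - all invented
    8.0: (np.array([52_000, 48_000, 50_500]), np.array([900, 1_100, 1_000])),
    4.0: (np.array([51_000, 49_500, 50_000]), np.array([450, 500, 480])),
    2.0: (np.array([47_000, 46_500, 48_000]), np.array([300, 320, 310])),
    1.0: (np.array([30_000, 28_000, 29_000]), np.array([280, 300, 290])),
}

si = {conc: staining_index(p, n) for conc, (p, n) in titration.items()}
for conc, value in sorted(si.items(), reverse=True):
    print(f"{conc:>4} ug/mL : SI = {value:,.0f}")
# In this invented series the SI peaks at 2 ug/mL: above that, extra antibody mostly
# raises the background spread, and below it the positive signal starts to fall off.
```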

The second cause of off-target binding is the Fc receptor (FcR). The FcR evolved to perform a specific function on the cells that express it (such as antigen-presenting cells), so Fc binding is a specific process, but one that is detrimental to the assay.

There are several different ways to block cells, and in a recent paper, Andersen and coworkers sought to determine the best methods for blocking. Specifically, the researchers were attempting to optimize the staining of human monocytes and macrophages, where off-target binding is mediated by the Fc receptor.

The authors demonstrate that isotype controls are not useful in their analysis — welcome news to many who don’t use these as a control.

In determining the best blocking scenarios, the authors tested several blocking agents, including FcBlock, human and mouse serum, and human and mouse IgG at several concentrations. The conclusion of this work is that, for these cell types, 100 μg/ml of human IgG was the best blocking agent, based on effectiveness and cost.

It goes without saying that the addition of a viability dye in the staining process is important for good analysis. Even in a single color experiment (GFP+ cells, for example), having a viability dye is critical to make sure dead cells are not being counted.

Physical parameters, like forward and side scatter alone, are not sufficient to identify dead cells (Figure 2).

Figure 2: Physical parameters alone cannot determine dead cells. In Figure 2A, dead cells were identified based on their DAPI signal. These cells were plotted on a FSCxSSC plot (Black arrow to plot on bottom left). This gate was applied to the top level and displayed on a FSCxSSC plot, as shown in 2B. A second gate was drawn around small, non-complex cells and the two gates used to generate plots of DAPI vs CD3, as shown on the bottom of 2B. This data clearly shows the cells identified by size parameter are not all dead. Without a viability dye, over 30% of the CD3 cells could have been missed.

Choosing and Prioritizing Controls

What is information about staining cells without a reminder about the proper controls that must be run with each experiment?

These controls include fluorescence controls (unstained, and single-stained compensation controls), fluorescence minus one (FMO) controls, reference controls, unstimulated controls, stimulated controls, instrument controls, and more.

One can argue that the controls are the most important part of an experiment. After all, the controls are necessary for the researcher to properly interpret their data in the context of the experiment. Without these controls, the data is impossible to interpret. This is even more important in light of the ongoing efforts to improve reproducibility and rigor in our scientific endeavours, rising to answer the challenges that researchers are discussing now (Figure 3).

Figure 3: Concerns over irreproducibility of data and wasted resources from Begley and Ioannidis (2015) Circ Res 116:116-126.

In the initial stages of the development of a panel, every effort should be made to include every control that the researcher can devise. That way, as the data is analyzed and the analysis template developed, the controls can be ranked as CRITICAL, USEFUL, or UNINFORMATIVE for data analysis.

Critical controls for a flow cytometry experiment include things like compensation controls and some FMO controls. Without these controls, the results of the analysis are suspect and should not be trusted. In other words, the data can’t be interpreted if these controls are excluded.

Useful controls would include some FMO controls, reference controls, and autofluorescent controls. These types of controls help with the analysis, and can support the conclusions of the critical controls. For example, in a T-cell panel, a CD3 or CD4 FMO control may be helpful, but may be less critical as there are other controls that ensure the cells are properly identified.

Uninformative controls include the isotype control, and poorly designed reference or stimulation controls. These do not aid in analysis, and can in fact have a negative impact. Take, for example, this figure from Andersen’s paper.

Figure 4: First figure from Andersen’s paper showing the staining of anti-Tie2 antibody and the isotype control.

It is known that the cells of interest express Tie2 at low levels, and this is borne out by the data comparing the background with the anti-Tie2 antibody. However, the isotype control itself shows almost a full log of separation from background, comparable to the positive signal, which would suggest that these cells don't express Tie2, contradicting the known data (Figure 4).

As the analysis workflow is developed, make sure to validate the controls, too.

In conclusion, we have discussed several areas to consider when staining cells. This includes ensuring that sample preparation results in a high quality single-cell suspension, validating the reagents to be used by titration, and not forgetting a viability dye. Finally, this also includes identifying the best controls for the experiment, while eliminating controls that are uninformative or otherwise confound the analysis.

Taken together, a focus on the staining of the cells is a critical first step to ensure high-quality data is returned when the experiment is completed. The more attention to detail at the beginning, the lower the chance of suffering from the GIGO (garbage in, garbage out) effect.

To learn more about Planning For Surface Staining Of Cells In Flow Cytometry, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.


How To Perform A Flow Cytometry t-Test


Written by Tim Bushnell, Ph.D

The ultimate goal of any experiment is to analyze data and determine whether it supports or disproves a given hypothesis. To do that, scientists turn to statistics.

Statistics is a branch of mathematics dealing with the collection, analysis, interpretation, presentation, and organization of data. In applying statistics to, e.g., a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model process to be studied.

One of the first important concepts to take from this definition is the idea of a population. An example population might be all the people in the world who have a specific disease.

It is time and cost prohibitive to try to study all of these people, so the scientist must sample a subset of the population, such that this sample represents (as best as possible) the whole population. How big the population is and what fraction is sampled in the experiment contribute to the power of the experiment, a topic for another day.

Figure 1: Relationship of population, sample size, and statistics.

This sample size, and how it is obtained, should be described before one begins any experiments, as getting the population sampling correct is a critical component of improving reproducibility. Consequences of poor sample design can be found throughout history, such as the issues surrounding the use of Thalidomide in pregnant women.

The second critical component is to identify the question(s) that the experiments are designed to test. This will lead the researcher to state the null hypothesis (H0), which is what statistics are designed to test.

An additional factor that should be addressed at the beginning of the experimental process is the significance level (α value) — the probability of rejecting the null hypothesis when it is actually true (a Type I statistical error).

At the conclusion of the experiments, we collect the data to generate a P value, which we compare to the α value.

If the P value is less than the α value, the null hypothesis is rejected, and the findings are considered statistically significant. On the other hand, if the P value is greater than or equal to the α value, the null hypothesis cannot be rejected.

Once the experiments are done and the primary analysis is completed, it is time for the secondary analysis.

There are a host of different tests available, depending on what comparisons are being made and the distribution of the data (i.e., normally distributed or not). There is an excellent resource at the Graphpad Software website, makers of Graphpad Prism.

If we wish to compare either a single group to a theoretical hypothesis, or two different groups, and these groups are normally distributed, the test of choice is the Student’s t-Test, a method developed by William Gosset while working at Guinness Brewery.

Using the t-Test, the t-statistic is calculated on the distributions, which is an intermediate step on the way to calculating the P value. The P value is then compared to the threshold to determine if the data is statistically significant.

Assumptions About the Data

The t-Test assumes that the data comes from a normal (Gaussian) distribution. That is to say, the data observes a bell-shaped curve.

Figure 2: A normal distribution.

Although the t-Test was originally developed for small samples, it is also resistant to deviations from the normal distribution with larger sample sizes.

If the data doesn't follow a normal distribution, a non-parametric test, such as the Wilcoxon or Mann-Whitney test, is best. Non-parametric tests rank the data and perform the comparison on the ranked values rather than on the raw measurements.

Performing a t-Test

The minimum information needed to perform a t-Test is the means, standard deviations, and number of observations for the two populations, as shown below:

Figure 3: Calculating a t-Test in Graphpad Prism (ver. 7) with input values calculated elsewhere.

The data is collected elsewhere, and the mean, standard deviation, and N are entered into the software. For visualization, a bar graph showing the average and standard deviation is plotted.

Using the analysis feature in the software, the appropriate statistical parameters are chosen (unpaired t-Test, threshold set to 0.05, as discussed above). The Welch correction is applied because the Ns differ between the two samples.

Prism generates a summary table and shows details in the red box. In this case, the experimental sample is statistically significantly different from the control, and we may reject the null hypothesis.
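The same calculation can be reproduced outside Prism. The sketch below uses SciPy's `ttest_ind_from_stats`, which accepts exactly the summary statistics described above (mean, standard deviation, and N per group); the numbers themselves are placeholders, not the values from the figure.

```python
from scipy import stats

# Placeholder summary statistics (not the values shown in the Prism figure).
control = dict(mean=42.0, std=5.1, nobs=8)
treated = dict(mean=55.3, std=7.4, nobs=6)

t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=control["mean"], std1=control["std"], nobs1=control["nobs"],
    mean2=treated["mean"], std2=treated["std"], nobs2=treated["nobs"],
    equal_var=False,   # Welch's correction, since the Ns (and variances) differ
)

alpha = 0.05
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")
# For clearly non-normal data, scipy.stats.mannwhitneyu is the usual nonparametric alternative.
```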

Another way to perform this test is to enter the data into your preferred program and let the software do the work, as shown below for Prism.

Figure 4: Calculating a t-Test in Graphpad Prism (ver. 7) by entering the data.

This second plotting method has the advantage of letting the reader see all the data points in the analysis.

Final Tips for Performing a t-Test

There are a few variations of the t-Test, based on sample size and variance in the data. One can perform a one- or two-tailed t-Test. The decision to use one versus the other is related to the hypothesis.

If the expected difference is in one direction, the one-tailed t-Test is performed. If it is not known, or the expected difference could be an increase or a decrease, the two-tailed t-Test is performed.

Figure 5: The null hypothesis for either a one-tailed (left) or two-tailed (right) t-Test.
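In most statistics packages, this choice is a single option. As one hedged example, recent versions of SciPy expose it through the `alternative` argument of `ttest_ind`; the data below is simulated purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=50, scale=5, size=10)   # simulated control measurements
treated = rng.normal(loc=58, scale=5, size=10)   # simulated treated measurements

# Two-tailed: the alternative hypothesis is simply "the means differ".
t_two, p_two = stats.ttest_ind(control, treated, equal_var=False)

# One-tailed: the direction is specified in advance. With the arguments in this order,
# alternative="less" tests whether the control mean is lower than the treated mean.
t_one, p_one = stats.ttest_ind(control, treated, equal_var=False, alternative="less")

print(f"two-tailed P = {p_two:.4f}, one-tailed P = {p_one:.4f}")
```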

In conclusion, to perform the t-Test, it is critical to start from the beginning of the experiment to establish several parameters, including the type of test, the null hypothesis, the assumptions about the data, the number of samples to be analyzed (Power of the experiment), and the threshold.

The experiments are performed, and only then, after the primary analysis is completed, is statistical testing performed.

Each software package has its specific methods of performing these tests, and we have shown you one (Graphpad Prism). It is recommended that you consult your local statistical community and see what they are using for their analysis.

By establishing the statistical plan at the beginning of the experiment, the planning for the rest of the experiment becomes easier. Likewise, one does not end up chasing a hypothesis with the data; rather, the data stands on its own to support or reject the hypothesis.

To learn more about How To Perform A Flow Cytometry t-Test, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.


3 Ways The ZE5 Cell Analyzer Accelerates Flow Cytometry Research Opportunities


As new instruments come on the market, vendors are quick to provide data proving the systems' prowess, including sensitivity, speed, and the like. These are important characteristics of the instrument, and should be reviewed. However, the real questions about any new instrument should look beyond these benchmarks. Specifically, the questions that often come to mind include:

  1. Will the new instrument improve current experimental workflows?
  2. Will the new instrument enable new and novel experimental questions?
  3. Will the new instrument help improve the reproducibility of experiments?

Evaluating the instrument in the context of these questions will help determine if acquiring the instrument will expand the capabilities for the local research community. In the case of the ZE5 Cell Analyzer, it is clear that with the advancements that have been made by the Propel and Bio-Rad teams, this instrument offers significant expansion of capacity, resulting in improved reproducibility of the data.

Several features of the ZE5 stand out as prime examples of why this new instrument is a must-have for the research lab.

  1. Improve Reproducibility — A "Flying Collar Wash Station" on the ZE5 is designed to wash the sample probe between samples to reduce carryover. For years, researchers have had to manually wash the SIP between samples to help reduce sample-to-sample carryover. Automating this feature is a huge benefit of the ZE5. By automating the process of cleaning the SIP, carryover is reduced, which in turn reduces one source of data variation. This is even more critical when considering rare event analysis, where sample carryover can potentially skew the data. The data below shows how efficient this system is.
    Figure 1: Carryover between samples on the ZE5: (A) Lysed whole blood was run on the ZE5 in high-throughput (HT) mode. After each sample, the system carried out an automatic wash cycle of 0.25 sec. outside and 1.75 sec. inside the SIP. A clean tube of water was run immediately after the wash to evaluate carryover. (B) The resulting carryover data showing an average carryover of 0.46% (+/- 0.023%).

  2. Five lasers, 27 fluorescent parameters — More lasers and detectors are an excellent feature: they offer improvement for standard assays, enable new assays, and can be used to improve the reproducibility of experiments. More detectors allow for a deeper characterization of a given population. In the case of a hard-to-obtain sample, more detectors give the researcher a larger breadth of characterization, so that the critical data can be obtained without having to split the sample, which would reduce the sensitivity of the measurement. With a large number of detectors, the ZE5 can also enable improved labeling of cells by allowing the researcher to "barcode" their samples. In fluorescent barcoding experiments, each sample is labeled with a combination of 2 or more fluorochromes at one of several concentrations. For example, if one uses 2 different fluorochromes, with 3 different intensities (low, medium, and high), it is possible to mix 6 different samples together. A 3 by 3 barcoding results in 9 samples.

    All the samples are mixed together before they are labeled with the antibody mix at the same time, under the exact same conditions. This improves the staining, and thus the reproducibility of your data, and with the added speed the ZE5 has for sample acquisition (see below), barcoded samples can be read in the same time as a single sample on a slower instrument.

    If you are interested, you can read about fluorescent barcoding in these papers by Krutzik and Nolan, and Krutzik et al.

  3. Superfast electronics — The fluidics of the ZE5 can deliver a stable flow rate of up to 2.5 μl/second. However, without matching fast processing electronics, the speed (and sample) would be wasted on increased coincident events and a high abort rate. The ZE5 delivers in the speed category, with very fast electronics and a cell transit time through the laser that is 3x faster than other systems on the market.

Figure 2: Stability of Signal at high acquisition rate: Beads were acquired at increasing events per second, and singlet beads were gated using pulse geometry gating. The %CV of two parameters (Side Scatter and FITC) were plotted over a range from approximately 4,000 eps to 129,000 eps. The mean and standard deviation of the CVs over this range are shown below.

As can be seen, the electronics are stable, with a tight CV maintained through a wide speed range. So, in addition to the barcoding discussed above, the fast electronics and stable flow rates enable rare event analysis. Imagine trying to measure a cell that is found at a frequency of 1 in 10^5 cells. With rare events like this, the statistics are governed by Poisson distributions, rather than the more familiar Gaussian distributions. In Poisson statistics, it is the number of positive events that is important, not the total number of events.

Figure 3: Time to collect 400 positive events. The time to collect 400 events of a rare population (1 in 10^5 cells) is plotted versus the speed of acquisition (in events per second).

As this figure shows, to collect 40 million events with a typical flow cytometer is going to take 150 minutes (2.5 hours) for a single sample. However, with the speed of the ZE5, these rare event experiments become possible, as even at a moderately fast rate of 60,000 events per second, collection time drops to less than 12 minutes. Thus, the ZE5 enables researchers to study and characterize rarer cell populations in a reasonable time.
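The arithmetic behind that figure is straightforward, and a small worked example makes it easy to rerun for your own population frequency and target count. The "typical" acquisition rates below are assumptions chosen for illustration, not measured values.

```python
def minutes_to_collect(target_positives: int, frequency: float, events_per_second: float) -> float:
    """Minutes needed to record `target_positives` cells of a population present at `frequency`
    (e.g. 1e-5 for 1 in 100,000), at a given acquisition rate."""
    total_events = target_positives / frequency      # 400 / 1e-5 = 40 million total events
    return total_events / events_per_second / 60

for eps in (5_000, 60_000, 100_000):                 # assumed acquisition rates, for illustration
    print(f"{eps:>7,} events/s -> {minutes_to_collect(400, 1e-5, eps):6.1f} minutes")
# ~133 min at 5,000 eps, ~11 min at 60,000 eps, ~7 min at 100,000 eps
```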

Some technological advances are incremental, while others are game-changing tools that allow the researcher to significantly improve current assays while opening new and novel avenues of research. With speed, sensitivity, and capacity to spare, the ZE5 fits into the game-changing category. Reduced carryover, increased speed of acquisition, and a large number of parameters all open up new and novel assays, while improving the quality and reproducibility of ongoing ones.

To learn more about 3 Ways The ZE5 Cell Analyzer Accelerates Flow Cytometry Research Opportunities, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.


Measuring Receptor Occupancy With Flow Cytometry


Written by Tim Bushnell, Ph.D

The field of medical therapeutics is moving into the area of precision medicine. In a global sense, precision medicine requires the doctor to assess a patient’s unique disease state — the susceptibilities and resistances of the disease targets to the arsenal of medicines at the physician’s disposal.

This is leading the push towards devising more nuanced tools, and an understanding of what specific patient characteristics dictate which tools to use.

For precision medicine to work, we must be able to identify biomarkers that are expressed on diseased cells, but absent on the normal cell.

An example of this type of biomarker is overexpression of Her2 on a subset of breast cancers. The drug Herceptin targets the Her2 overexpressed on these cells. Studies suggest that the binding of Herceptin induces an immune response, as well as causing a G1 arrest, reducing cell proliferation.

The success of drugs like Herceptin and Rituximab is one of the reasons there are hundreds of drugs of this class in development.

The ability to perform quantitative assays in a phenotypically defined cell population, and the ease of moving to high-throughput assays, means that flow cytometry is the assay of choice for assessing the performance of these drug candidates, especially as they enter into pre-clinical trials.

Assessing the appropriate dosing for pre-clinical trials is an especially important step. Too low a dose and the data is not compelling, too high and you run the risk of adverse reactions in your subjects.

While these decisions are based on data from animal models, there is not always a good correlation between animal and human responses.

Take, for example, this report from a phase 1 clinical trial of an anti-CD28 antibody, TGN1412. Healthy subjects were infused with the drug at a dose that animal studies suggested would give only 10% receptor occupancy; occupancy turned out to be over 90% in humans, and the subjects ended up in the ICU with systemic inflammatory response syndrome.

Measuring Receptor Occupancy (RO) by flow cytometry seems a logical step, and has been receiving a great deal of attention in the clinical cytometry world. So much so, that Cytometry B devoted a special issue to this topic.

There are 3 common ways to measure RO, either directly or indirectly. Each provides a different type of data, and the experimental needs will dictate which assay to perform. An additional consideration for method choice is the availability of directly conjugated reagents.

1. Measuring total receptor expression

In this assay, a labeled, non-competing antibody is used to label the cells. This antibody should bind to the same antigen as the target antibody, but not to the same epitope. Using this assay, it is possible to obtain a measure of the total expression on the surface of the cell.


Figure 1: Theoretical measurement of total receptor expression. The presence of the target antibody is irrelevant; the non-competing Ab measures the total expression in terms of antibody binding capacity (ABC). A standard curve is generated using beads with known quantities of ABC, and the MFI is related to the ABC on the target cells in an experiment.

2. Measuring free receptor expression

In this assay, a fluorescently labeled version of the target antibody is used. Cells are first exposed to the unlabeled target antibody; this is followed by the labeled target antibody, which will only bind to free sites. Again, using a standard curve, it is possible to measure the number of free receptors.


Figure 2: Theoretical measurement of free receptor expression. After incubating the cells with the target antibody, the cells are incubated with a fluorescently labeled target antibody. A standard curve is generated using beads with known quantities of ABC, and the MFI is related to the ABC on the target cells in an experiment.

3. Measuring receptor occupancy

If the first two assays are combined, it becomes possible to monitor the occupancy of a receptor over time. This is especially useful, as many of the monoclonal antibodies in use have long biological half-lives, and understanding their kinetics is critical for making therapeutic decisions.


Figure 3: Measuring receptor occupancy by flow cytometry. Cells are first incubated with the target antibody. This is followed by incubation with the fluorescently labeled target antibody and the non-competing antibody. After calculating the ABCs, the average receptor occupancy can be determined.
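Once the two ABC values have been read off the bead standard curve, the occupancy calculation itself is simple. Below is a hedged sketch; the function, the day labels, and all ABC numbers are invented for illustration.

```python
def receptor_occupancy(total_abc: float, free_abc: float) -> float:
    """Percent of receptors occupied by the therapeutic antibody.

    total_abc - antibody binding capacity from the non-competing antibody (assay 1)
    free_abc  - antibody binding capacity from the labeled target antibody (assay 2)
    Both values are assumed to have been converted from MFI via the bead standard curve.
    """
    return 100.0 * (total_abc - free_abc) / total_abc

# Invented time course after dosing, to illustrate the kind of kinetics shown in Figure 4.
time_course = {0: (12_000, 11_800), 1: (12_100, 1_300), 7: (11_900, 4_700)}
for day, (total, free) in time_course.items():
    print(f"day {day}: {receptor_occupancy(total, free):5.1f}% occupied")
```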

Of course, if this experiment is performed over several days, curves and kinetics of binding can be readily calculated. There are several different ways this data can be plotted, depending on the question being asked.


Figure 4: Changes in receptor occupancy over time, as measured using the assay in Figure 3.

As with every clinical assay, there are a host of additional concerns that need to be addressed in the experimental setup: assay validation, optimization, and more. A comprehensive review of these steps can be found here, and is recommended reading for anyone seeking to integrate receptor occupancy assays into their research.

In conclusion, measuring the receptor occupancy of a given target showcases the power of flow cytometry. With the right reagents, best practices, and attention to detail, this assay can become a mainstay in your research toolkit. It extends quantitative flow cytometry to the next level, to determine a complete biological picture of how efficiently a given target is being bound. This also serves as the basis for even finer analysis when combined with assessment of downstream targets that the engagement of the receptor by the target antibody may affect. Phosphorylation, cell cycle arrest, and protein expression are all within reach, resulting in an even more complete picture of the process that will ultimately give the medical community a fuller understanding of how these potential therapeutics work, and when to use them. This is truly personalized medicine at its fullest potential.

For further reading:

  1. Targeted Cancer Therapies at the NIH
  2. American Association of Pharmaceutical Scientists blog 1 June 2016
  3. Green et al (2016) Recommendations for the development and validation of flow cytometry-based receptor occupancy assays. Cytometry B 90:141-149

To learn more about Measuring Receptor Occupancy With Flow Cytometry, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.


5 Essential Calculations For Accurate Flow Cytometry Results


Flow cytometry is a numbers game. There are percentages of a population, fluorescence intensity measurements, sample averages, data normalization, and more. Many of these common calculations are useful, but surrounded by misconceptions. This primer will help you decide which calculation to use, when to use it, and how to interpret the results.

1. Staining Index

The staining index (SI) is a way to measure the relative brightness of a fluorochrome and compare it to other fluorochromes in a biologically relevant manner.

The SI is useful for ranking fluorochrome brightness on your instrument of choice. It is also a useful tool for evaluating titration data.

SI is a relative number, so it is best to focus on comparisons, and not the absolute value.

In the case of making a decision as to which fluorochrome is brighter than another, there are sites like this one at Biolegend, and this one at BD, that give a relative rank based on a standard analysis. These are useful, but if your system is significantly different from the standard, you may benefit from performing the experiments yourself.

SI was first reported in a Bigos cytometry abstract, and popularized by Maecker et al. The initial concept for the SI is a way to compare and rank different fluorochromes to help researchers make decisions as to the relative brightness of these different fluorochromes, as shown below (Figure 1).

Figure 1. Schematic and formula for the Staining Index.

Briefly, the distance is the difference between the central tendency (classically, the mean) of the positive population and that of the negative population. This is divided by twice the spread of the negative population, as measured by its standard deviation.
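
As a quick sketch of that formula in code (the simulated intensity values below are purely illustrative):

```python
import numpy as np

def staining_index(pos: np.ndarray, neg: np.ndarray) -> float:
    """Classic staining index: (MFI_pos - MFI_neg) / (2 * SD_neg)."""
    return (pos.mean() - neg.mean()) / (2.0 * neg.std(ddof=1))

# Hypothetical example with simulated fluorescence intensities
rng = np.random.default_rng(1)
negative = rng.normal(loc=100, scale=30, size=5_000)      # unstained/negative events
positive = rng.normal(loc=2_000, scale=400, size=5_000)   # stained/positive events
print(f"SI = {staining_index(positive, negative):.1f}")
```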

The SI has several uses. Most notably is the generation of the staining index chart, such as the one shown in Table 1. By calculating the relative brightness of each fluorochrome, you get a tool to help you decide which fluorochromes to use during panel building.

In Table 1, LSR-12A is a 3-laser (405, 488, and 633 nm) system, while LSR-18A is a 4-laser (405, 488, 532, and 633 nm) system.

Table 1: Staining index comparing two different instruments.

Notice the differences in the relative rankings. Some are easy to explain. For example, AF532 is relatively brighter on LSR-18A than on LSR-12A, due to the presence of the 532 nm laser.

Other differences are related to sensitivity and background on these different instruments. For example, APC (which is considered relatively bright) is not as bright as the FITC signal on either instrument. The background fluorescence and spread on these two machines drive this observation.

Another use of the SI is in evaluating titration data to identify the best antibody concentration. While this is often judged by eye, plotting the SI against concentration makes the comparison much clearer (Figure 2).

Figure 2: Titration data, using staining index vs. concentration.

The staining index is a useful calculation and should be in your flow cytometry toolkit.

As a note, Telford and co-workers published a variation of the staining index.

Having run these equations side-by-side, I have yet to see a difference, so choose the one you are most comfortable with and use it.

2. Data Normalization

Sometimes, data needs to be normalized. This process typically involves identifying an appropriate control population and dividing the experimental value by the control value.

A simple fold-over-background calculation can be as easy as dividing the % positive in the experimental sample by the % positive in the control sample, yielding a single metric that can be carried into statistical analysis.

For expression-based calculations, use of the resolution metric (RD) is recommended.

This metric is based on Fisher's Discriminant Ratio. Using this equation, the difference between the two populations is measured and corrected for the spread of the data by dividing by the sum of the standard deviations. The base formula is shown here:

RD = (Median of population 1 - Median of population 2) / (SD of population 1 + SD of population 2)

Using this formula, it is possible to convert measurements taken on different days to a single, unitless number that is better suited for comparisons. This calculation has been used in genomics analysis for a while, and is becoming common in flow cytometry as well.
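
A short sketch of that calculation, using medians as the measure of central tendency (the simulated MFI distributions below are illustrative only):

```python
import numpy as np

def resolution_metric(sample_a: np.ndarray, sample_b: np.ndarray) -> float:
    """RD: difference between the two populations divided by the sum of their SDs."""
    return abs(np.median(sample_a) - np.median(sample_b)) / (
        sample_a.std(ddof=1) + sample_b.std(ddof=1)
    )

rng = np.random.default_rng(7)
control = rng.normal(500, 120, 10_000)    # hypothetical control MFI distribution
treated = rng.normal(900, 150, 10_000)    # hypothetical treated MFI distribution
print(f"RD = {resolution_metric(treated, control):.2f}")
```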

3. Statistical calculations

There are a variety of statistical tools that you will need to use in summarizing your data and evaluating the hypothesis that the experiments were designed to test. At the end of the day, there are a bunch of numbers that have to be properly analyzed.

If the experimental plan was properly laid out, the statistical methods to be used were defined at the same time.

The practical math behind each of the different methods is not something to worry about in this post. There are great software packages out there that can do the calculations for you.

However, it is important to be aware of several things when discussing statistical calculations.

  1. Choose the right test — You need to choose the correct statistical test based on what you are attempting to prove. These could be t-Tests, ANOVA, linear regression, or one of another handful of tests based on the distribution of the data and the comparisons being made.
  2. Set the proper threshold — The α value is the threshold that will be used to determine if your data meets the criteria to reject the null hypothesis. If the calculated P value is less than the threshold, the result is considered statistically significant (you reject the null hypothesis). If the P value is greater than the threshold, you cannot reject the null hypothesis. Of course, it is important to remember that the question the experiment was designed to test must be important and biologically relevant. There are cases where significance is found, but the question was scientifically trivial. Make sure to state the hypothesis at the beginning and follow through to the end.
  3. Collect enough samples — You don’t want to get into a discussion with a biostatistician like these two fellows — a power calculation can help you determine the number of samples you should collect to properly analyze your experiments.

4. Sorting Calculations

Cell sorting is a powerful tool in isolating interesting cells from the background cells in the system.

From a simple GFP+ sort, to a complex multicolor panel to isolate a rare circulating tumor cell, there is a lot of math behind sorting, and not just for getting the system to work. Here are some calculations that will help you answer the most common sorting questions.

How fast can I sort?

Once you realize that cell sorters are sorting droplets of liquid, things start to become a bit easier. With electrostatic cell sorters, the goal is to have one cell in one droplet, and no cells in the surrounding droplets.

This process is governed by Poisson statistics.

As shown in this figure from Rui Gardner, head of Flow Cytometry at Memorial Sloan Kettering, having 1 cell every 4 drops gives you a reasonable probability for having no cells in the leading or lagging drop.

So, the simple calculation of drop drive frequency divided by 4 will give you the maximum event rate you should strive for on the sorter.
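
As a quick sketch (the drop-drive frequency below is a made-up value), together with the Poisson probability that motivates the one-cell-per-four-drops guideline:

```python
import math

drop_drive_hz = 70_000             # hypothetical droplet generation frequency (drops/sec)
max_event_rate = drop_drive_hz / 4
print(f"Target event rate: {max_event_rate:,.0f} events/sec")

# Poisson check: at an average of 1 cell per 4 drops (lambda = 0.25 cells/drop),
# the probability that any given drop is empty is exp(-lambda).
lam = 0.25
p_empty = math.exp(-lam)
print(f"P(a neighboring drop is empty) = {p_empty:.2f}")        # ~0.78
print(f"P(both neighboring drops are empty) = {p_empty**2:.2f}") # ~0.61
```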

How many cells do I need to start with? How long will it take?

Starting with the required number of cells for the downstream application, it is possible to approximate how many cells to start with so that you will end up with enough cells in the end. At the same time, we can estimate how long the run should take, barring unforeseen circumstances.

  1. Total cells needed / (frequency of population * sort efficiency) = starting population

    I like to double the starting population to account for losses during sample processing.

  2. Starting population ÷ maximum event rate (events/sec) = time of sort (seconds); a worked sketch follows below.
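
Putting the two formulas together in a short sketch (every input value below is hypothetical):

```python
cells_needed = 100_000      # cells required for the downstream application
frequency = 0.02            # target population frequency (2% of events)
sort_efficiency = 0.70      # expected sorter efficiency/recovery
max_event_rate = 17_500     # events/sec, from drop drive frequency / 4

starting_cells = cells_needed / (frequency * sort_efficiency)
starting_cells *= 2         # double it to cover losses during processing

sort_time_sec = starting_cells / max_event_rate
print(f"Start with ~{starting_cells:,.0f} cells")
print(f"Estimated sort time: {sort_time_sec / 60:.0f} minutes")
```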

How pure is my sample? What is my post-sort recovery?

Here are three common values that can be used to characterize a sort.

Armed with this information, it is possible for you to figure out how long your sort will take, so you can plan accordingly. You can also observe how good the sort was and if you have enough cells for your downstream application.

5. Compensation

No post about flow cytometry calculations could be complete without touching on the most fundamental of calculations in flow cytometry — the calculation of the compensation matrix.

The compensation matrix is essential for good flow cytometry: it accounts for the spectral overlap of each fluorochrome into secondary channels so that true signal can be identified.

As a reminder, the 3 rules of compensation are:

  1. The compensation sample should be at least as bright as the experimental samples to which the compensation will be applied.
  2. The backgrounds of the negative and carrier must be matched (no universal negative; cells-to-cells, beads-to-beads).
  3. The compensation color must be matched to the experimental color.
    1. Same fluorochrome (FITC ≠ A488, tandems must be from exact same stock).
    2. Same sensitivity (don’t change voltage between tubes).

And, as always, collect enough events. Following these rules will ensure you have consistent and correct compensation.
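
For intuition only (this is a toy sketch with invented spillover values, not the algorithm any particular software package uses), compensation amounts to solving a small linear system: the observed signals are the true signals mixed through the spillover matrix, so the true signals are recovered by undoing that mixing.

```python
import numpy as np

# Toy 2-color example; spillover values are invented for illustration.
# Rows = true dye, columns = detector: 15% of FITC spills into the PE detector,
# 2% of PE spills into the FITC detector.
spillover = np.array([[1.00, 0.15],
                      [0.02, 1.00]])

observed = np.array([12_000.0, 2_300.0])   # raw signals in the FITC and PE detectors

# observed = true @ spillover, so solve spillover.T @ true = observed for "true"
true_signal = np.linalg.solve(spillover.T, observed)
print(true_signal)   # mostly FITC; the PE channel drops back toward background
```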

If you have not yet integrated these calculations into your workflow, consider where each would be useful. Some, like the SI, are very useful in the development of new panels — from titration to voltration, it makes the comparison of different samples easy. While it is possible to plot the data and gauge it by eye, having a number makes the decision much easier. When preparing for a sort, it is vital to do these calculations, even if only to get a ballpark figure for how many cells you need to start with.

Compensation is, of course, one of the most critical calculations, so make sure you provide the correct controls that meet the “3 Rules”, and let the software do the work. In the end, doing these calculations should help you with your work, as it will improve consistency and reproducibility, and ensure you have sufficient cells for your downstream applications.

To learn more about the 5 Essential Calculations For Accurate Flow Cytometry Results, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.



4 Considerations For Assessing Protein Phosphorylation Using Flow Cytometry


Signaling pathways are of great interest in many areas of research because dysregulation of these pathways can lead to diseases such as cancer and lupus. This also means these pathways contain potential therapeutic targets to help treat these diseases.

The power of flow cytometry is the ability to analyze millions of events in a short period of time. This allows for the analysis of dynamic cellular processes in a phenotypically defined manner.

The process of cellular signaling relies on changes in the phosphorylation state of proteins to either up or down regulate downstream processes (Figure 1).

Figure 1: The PI3-kinase signaling pathway has been implicated in cancer growth. From Vivanco and Sawyers (2002) Nature Reviews Cancer 2:489-501

Initially, phosphorylation state analysis was performed by western blot — a bulk analysis method where subtleties of expression can be lost. To transition phosphorylation analysis to a flow cytometry assay, several factors needed to be optimized.

First, a source of high-quality antibodies to a very specific target needed to be produced. Second, the best way to stain the cells for specific, efficient labeling of targets while preserving the integrity of surface staining needed to be determined.

Proteins can be phosphorylated at 3 residues: Serine, Threonine, or Tyrosine. Phosphoprotein antibodies must be able to resolve a single change at the appropriate site in the presence of non-phosphorylated targets. When screening hybridomas, manufacturers must screen for both positive binding and absence of binding to the non-phosphorylated target.

The second concern was testing which protocols would be best for fixation and permeabilization of the cells to preserve the surface staining, the epitopes, and cell shape, while ensuring the antibodies were able to access the intracellular space.

If you are considering going after phosphorylation state, here are 4 important considerations that you should keep in mind when performing your experiments.

1. Choice of Instrument — while this may sound odd, there are 3 different classes of instrument on the market that can be used for cytometry-based phospho assays: traditional fluorescent flow cytometers, spectral analyzers, and mass cytometers. Each has its strengths when performing this assay.

  1. Traditional Fluorescent Flow — most readily accessible to the majority of users. There is a limited choice of fluorochromes for phosphoflow, which is impacted by the size of the fluorochrome and the autofluorescence of the cells. This limits the number of targets that can be measured in one assay.
  2. Spectral Analyzers — still relatively new to the marketplace. These systems should help improve resolution, as they are better able to handle the autofluorescence of the cells. There are still limits because of fluorochrome size, but spectral analyzers can resolve closely overlapping fluorochromes. As shown in the figure below, generated on the new Aurora from Cytek, they are able to resolve AF488 and AF532 in the presence of PE (Figure 2).

  Figure 2: Emission resolution possible with spectral analyzers. Information from Cytek.

  3. Mass Cytometry — it is hard to beat the power and multiplexing capabilities of the CyTOF mass cytometer. Over 30 parameters are possible, and the use of isotope-labeled antibodies alleviates size constraints. Likewise, there is no autofluorescence from the cells to reduce sensitivity. The high cost of adopting a mass cytometry panel, the limited availability of reagents, and potential data analysis complexity all reduce the attractiveness of this technology, but if you are seeking to screen a large number of targets, the mass cytometer is hard to beat.

2. Choice of Fluorochrome — panel design for a phosphoflow assay offers some additional challenges. The first consideration is the size of the fluorochrome.

Permeabilization makes holes in the membrane, but it is important that the cellular contents do not leak out, so the holes can't be too large. At the same time, we are trying to get antibodies into the cells; imagine adding a PE molecule that is larger than a typical antibody and trying to get that conjugate into the cell.

That is why cyclic ring compounds like Fluorescein, Texas Red, and the Alexa-type dyes are popular. These are relatively small (usually under 1,000 g/mol), so they do not increase the size of the antibody as much as the larger fluorochromes do (Figure 3).

Figure 3: Size of some common fluorochromes including Fluorescein, Alexa 532 (carboxylic acid, succinimidyl ester shown), Phycoerythrin, allophycocyanin, and a typical antibody.

Once the fluorochrome choice is made, surface staining must be considered. This will be impacted in 2 ways.

First, the choice of phospho-antibody conjugate will remove some fluorochromes from consideration. Second, the optimal fixation conditions for the phospho-antibody may negatively impact the fluorochrome used for surface staining. Fortunately, there is a guide published by Cytobank to help make this decision.

3. Choice of Protocol — fixation and permeabilization protocols are critically important for the success of these experiments. The same link from Cytobank can help with protocol choice. The choice of fixatives, and the order in which they are used, is critical to get right and to optimize (Figure 4).

Figure 4: Effects of different fixation protocols on the surface staining of different clones and fluorochromes. From Perez et al., (2005) Curr Protoc Cytom. Unit 6.20.

4. Choice of Analysis — when done properly, this assay can generate a great deal of information quickly, and it is critical to find ways to present it both graphically and statistically. This can be done using heat-maps or other tools.

Figure 5: Heatmap analysis and clustering of 6 different phospho-antibodies, measured either at baseline or under one of 5 different stimulation conditions. From Irish et al., (2004) Cell 118:217-228.

If you are interested, another way to look at this is a fold change. Jonathan Irish recommends using a transformed ratio, and in the simplest form it is:

log10(MFIstim) – log10 (MFIunstim)

Other data transformations exist, and these are discussed here.
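
A minimal sketch of that transformed ratio (the MFI values below are hypothetical):

```python
import math

def log10_fold_change(mfi_stim: float, mfi_unstim: float) -> float:
    """Transformed ratio: log10(MFI_stim) - log10(MFI_unstim)."""
    return math.log10(mfi_stim) - math.log10(mfi_unstim)

# Hypothetical MFIs for a phospho-target before and after stimulation
print(f"{log10_fold_change(4_500, 600):+.2f}")   # ~ +0.88 (about a 7.5-fold increase)
```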

Of course, the standard considerations for any flow cytometry experiment still apply, such as appropriate samples, proper stimulation, the necessary controls, and the protocols for setting up and running the instrument. Phosphoflow adds a level of complexity to this process, but it is not insurmountable.

If you are seriously considering adopting this assay into your research, it may be good to read and take the U937 challenge. This challenge is a perfect training tool to see how well you can perform and analyze phosphoflow data.

If you want more training, the Irish lab at Vanderbilt has put their training material on the web for everyone to use. You can also find some videos at the Nolan lab at Stanford.

For those working in the signaling field, having the ability to take a sample and phenotypically identify it, while knowing what is happening inside the cell to the target molecules of choice, opens up a host of new opportunities. These assays are amenable to high throughput setup, meaning that biologically relevant outcomes in pre-clinical drug discovery can be measured directly. All told, with a little forethought, some careful planning, and validation, phosphoflow assays are within your reach.

To learn more about the 4 Considerations For Assessing Protein Phosphorylation Using Flow Cytometry, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.


How to Perform Doublet Discrimination In Flow Cytometry



You are probably familiar with the term, “doublet discrimination” or “doublet exclusion”, and have likely included this flow cytometry measurement into at least some (if not all) of your gating strategies.

Even though you may utilize this important gating strategy, you may not have had the chance to delve deeper to explore exactly what doublets are and why it’s critical to exclude them. This article aims to do just that.

What is doublet discrimination?

The first aspect to understanding how a doublet exclusion gate works is to define what a doublet is. Most straightforwardly, a doublet is a single event that actually consists of 2 independent particles.

The cytometer classified these particles as a single event because they passed through the interrogation point very close to one another. In other words, the particles were so close together when they passed through this laser spot, that the instrument was incapable of distinguishing them as individual events or particles.

Given how good cytometers are at measuring single cells, how would a situation arise in which the instrument would not be able to do this? The answer to this question has a whole lot to do with cytometer electronics, and how they classify cells and other particles as events.

You may recall that an “event” is the fundamental unit of measurement in flow cytometry, and is defined by what we call a pulse. Pulses occur as a cell passes through the laser beam spot, and this passage generates signal from the detectors. This signal is monitored and processed by the cytometer’s electronics, and is the origin of the “A”, “H”, and “W” pulse parameters we know and love (Figure 1).

In the absence of a cell in a laser beam, the output of the detectors is not 0 but rather a low and constant level — a background “hum,” if you will — and is interpreted by the cytometer electronics as what we call the baseline.

Figure 1. Anatomy of the voltage pulse.

In order for something to actually be considered an event, a few things have to happen. First, the detectors have to generate signal.

Usually, this occurs when a cell or other particle passes through the laser beam. However, signal can also arise from less desirable sources, such as an extraordinarily high PMT voltage (which amplifies the baseline “hum” enough to generate events on its own), or errant laser light escaping into the detector.

Second, this signal needs to cross what we call the “threshold” in the “trigger” channel. The threshold is the fine line between being an event and not being an event. Any pulse that fails to cross this line in the trigger channel is not considered to be relevant, and is ignored by the electronics.

What happens at the threshold is similar to what happens when you blow bubbles with a wand, like we did as kids. If you blow only slightly on the wand, a bubble may pucker but not enough to actually escape as a discrete unit. However, if you blow strongly enough (i.e. the signal is intense enough), the bubble will “pass the threshold” and escape as a fully fledged bubble (a pulse).

Third, the signal needs to drop back down to baseline. It is based on this third criterion that a doublet can occur. If 2 cells pass through the trigger laser so close together that the pulse does not fall back to baseline between them, the cytometer assumes their 2 pulses actually belong to one particle and will classify them as such: one, single, large event (Figure 2).

Figure 2. Voltage pulse of a doublet event.

Why perform doublet discrimination?

These doublets can have some negative effects on results and data. Most critically, they wreak havoc when sorting. Failure to include doublet exclusion in your gating strategy is a sure way to end up with poor purity.

When 2 events, one a target event for sorting and the other a non-target event, comprise a doublet, BOTH will be sorted and purity will suffer. For sorts that require extra stringency in purity, 2 individual doublet exclusion strategies can be used, which we will discuss shortly.

The importance of excluding doublets is certainly not restricted to sorting. When identifying subpopulations for analysis, the presence of doublets can skew population frequencies, which can in turn impact how the data are interpreted. If a doublet consists of a CD4+ cell and a CD4- cell, the event they comprise will be classified as CD4+, skewing the CD4+ percentage.

Additionally, doublets can make for some very strange staining patterns. If a doublet consists of one CD4+ cell and one CD8+ cell, you may mistake this data artifact for the presence of a rare CD4+CD8+ population.

Finally, including a plot that serves as a doublet exclusion can also give you a sense of both how sticky your sample is, as well as the general quality of your sample preparation. For example, lots of doublets may indicate poor enzymatic digestion.

How to perform doublet discrimination

So, how do you actually identify and exclude doublets? The answer to this question can be gleaned from taking a deeper look at pulses. Let’s revisit what a doublet pulse looks like in comparison to that of a single particle (Figure 3).

Figure 3. Voltage pulses for single (left) and doublet (right) events.

Notice the differences between the doublet pulse and the single particle pulse: both the area and the width of the doublet pulse are larger than the single cell’s (because two cells spend longer passing through a laser beam than one cell) but the heights of the two pulses are very close, if not identical. We can take advantage of these observations to parse out which pulses belong to doublets and which belong to true single events in the data set.

There are a few things that we need to do to accomplish this. First, we have to choose a channel in which to compare area, height, and width measurements to each other. The only requirement for this channel is that it should be scaled linearly.

The magnitude of difference in any pulse parameter between a doublet and single event is not large, and the resolution of linear scale is necessary to be able to accurately identify doublets to exclude. This requirement precludes most fluorescent parameters, which are typically scaled logarithmically, leaving forward and side scatter (which are coincidentally also nice and bright signals) as the best choices.

One exception is when performing cell cycle analysis by DNA measurement. In this case, the DNA dye measurement will be scaled linearly, and this channel is often the best choice for a doublet exclusion.

Next, we need to set up plots to make the determination. This is done in a variety of ways, and the method that is chosen is often based on personal preference. The most typical plots are based on forward scatter, as the chart below indicates, but side scatter can also be a good choice.

X-axis Y-axis
FSC-Area FSC-Height
FSC-Area FSC-Width
FSC-Height FSC-Width

The final step is to identify doublets, draw a region around the single cells to exclude those doublets, and gate all subsequent analysis on this region. Depicted below are some typical plots and where doublets can be found (Figure 4).

One important tip: if you are using BD “digital” FACSDiva instrumentation, the pulse width parameter is not really measured, but is calculated from the pulse area. Therefore, in order to ensure an accurate doublet exclusion gate, be sure to calibrate the Area Scaling Factor associated with the doublet discrimination parameter if you intend to use the width pulse parameter for doublet exclusion.

Figure 4: Examples of different methods for excluding doublets
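
As an illustration of the same idea in code (not a replacement for drawing the gate on your own plots), a ratio-based singlet gate on simulated FSC-A versus FSC-H data might look like the sketch below; the thresholds are hypothetical and must be tuned on real data.

```python
import numpy as np

# Simulated pulse data: for singlets, area tracks height; doublets have ~2x area.
rng = np.random.default_rng(3)
fsc_h = rng.normal(50_000, 8_000, 20_000)          # simulated pulse heights
fsc_a = fsc_h * rng.normal(1.0, 0.05, 20_000)      # singlets: area ~ height
fsc_a[:1_000] *= 1.9                               # spike in some simulated doublets

ratio = fsc_a / fsc_h
singlet_mask = (ratio > 0.85) & (ratio < 1.25)     # keep events where area ~ height
print(f"Singlets retained: {singlet_mask.mean():.1%}")
```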

You may also be wondering whether implementing a tighter, more restrictive forward and side scatter gate may preclude the need to include a formal doublet exclusion gate. While some doublets can be identified on a forward versus side scatter plot, not all can, especially when cells are irregularly shaped or the sample preparation is heterogeneous, so it’s not worth the risk.

One final point. Don’t necessarily conclude that the presence of doublets in a sample reflects poor sample quality. Doublets are inevitable; even the best cell preparations contain them. Their presence is a function of random distribution and, considering that flow cytometry and cell sorting are all about random distributions, unavoidable. Some cells will simply end up close enough to one another to produce a doublet, even in a suspension that consists entirely of single cells. The faster the cells are pushed through the system and the denser the sample, the higher the frequency of doublets. So, don’t fret — just gate them out.

That’s about it! We hope you now have an appreciation of what doublets are, in terms of the instrumentation, and how and why to make sure they are excluded when analyzing a data set.

To learn more about How to Perform Doublet Discrimination In Flow Cytometry, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.


5 Best Practices For Accurate Flow Cytometry Results


How do you follow best practices in flow cytometry to improve reproducibility?

Reproducibility is in the science spotlight these days. With the growing body of evidence showing how much translational research is not reproducible, funding agencies and journals are taking note.

Flow cytometry, as a technique, has changed and developed over the years, with researchers constantly evolving and evaluating best practices based on technological developments.

However, in the dark recesses of old lab notebooks, there still exist the time-worn protocols of yesteryear that come back to haunt the next generation of graduate students. The lure of getting a head start by using an already written protocol drives them to perform experiments using obsolete and outdated methods, dooming their research to the bin of irreproducible results.

It’s time to shine the light of modern cytometry on these bygone practices, and in doing so, provide tools for researchers to improve their experiments with current best practices.

1. Manual Data Compensation

In the days of analog flow cytometers, data was processed (transformed) before it was displayed and saved in the file.

Researchers had limited tools available to them, so experiments were compensated by manipulating a series of sliders to remove spillover from secondary channels. This became a bit of a guessing game, as shown in Figure 1.

Figure 1: Manual data compensation is a trial and error method.

The best way to perform proper compensation is to take advantage of automated compensation.

In most acquisition and analysis software packages, there is an automated compensation algorithm. To use these built-in algorithms, the user must collect a series of single-color controls, identify the positive and negative populations for each control, and let the computer do the heavy lifting (Figure 2).

The critical consideration for robust automated compensation is that the controls meet 3 criteria. These have also been called the “3 Rules of Compensation”.

  1. The control must be at least as bright as the experimental sample the compensation will be applied to.
  2. The backgrounds of the positive and negative samples must be identical.
  3. The control must match the experimental fluorochrome. This means the tube must be acquired at the same voltage, and the exact same fluorochrome must be used.

Within these 3 rules are some inherent assumptions.

  1. The fluorescent signal is within the linear range of the detector.
  2. A sufficient number of events is collected.
  3. The controls were treated identically to the experimental sample.

If the controls meet these 3 rules, automated compensation will be accurate.

Figure 2: How single stained controls are used to determine the compensation.

2. Isotype controls

After antigen exposure, B cells undergo class-switch recombination (CSR), which results in the constant region of the heavy chain being swapped to a different type (e.g. from an IgM to an IgG1).

The variable region remains unchanged, so the biological result of CSR is that the antibody interacts with different effector molecules.

In flow cytometry, one of the earliest controls used to detect background binding and determine positivity was termed the “isotype control”: an antibody of the same isotype as the antibody being tested, but with binding specificity to an irrelevant antigen.

The theory was that 2 antibodies of the same isotype would show similar non-specific binding to the target cells, and thus the background binding on the cells could be identified. This requires several assumptions about antibody binding including:

  1. The variable region of the isotype control has similar affinity for secondary (non-specific) targets as the variable region of the target antibody.
  2. There are no primary targets for the isotype antibody to bind to.
  3. The fluorochrome-to-protein (F/P) ratio is the same on both antibodies.

For example, consider the mouse IgG2a κ clone MOPC-173. This clone was first produced in the early 1970s, and the variable region has an “unknown” target.

Reading the technical specifications on this clone, vendors claim it is routinely tested against a suite of normal cells from various organisms to show it doesn’t bind.

Who is to say the target is not on a rare subset of cells that have not been discovered because they were always excluded when this reagent was used?

As for the F/P ratio, it is well-known that different antibodies, even of the same isotype, have different binding capacities for fluorochromes and one may never know if the F/P ratios are matched.

One potential exception to this is in the case of large protein-based fluorochromes and their tandem derivatives. Due to steric issues, most of these conjugates have an F/P ratio of 1:1, but even that is not universally guaranteed.

These limitations lead us to discourage the use of an isotype control, as it provides no additional information and may even lead to erroneous conclusions. For those looking for more discussion, here is a list of references to read:

  1. Keeney et al. (1998) Cytometry 34:280-283
  2. Baumgarth and Roederer (2000) J Immunol Methods 243:77-97
  3. Maecker and Trotter (2006) Cytometry A 69A:1037-1042
  4. Hulspas et al. (2009) Cytometry B 76B:355-364
  5. Andersen et al. (2016) Cytometry A 89:1001-1009

There is no perfect control for nonspecific binding. Rather, it must be procedurally minimized at several levels, including using high-quality antibodies, proper blocking (see Andersen et al. for excellent experiments on this topic), titration to ensure the appropriate concentration of antibody, and the use of proper controls such as the FMO (discussed below), biological controls, internal negative populations, and more.

3. Absence of the Fluorescence Minus One (FMO) control

If the isotype control can’t be used to set positivity, the question is, “How can a researcher do it?”

The answer is that there is no one specific control that should be relied upon to determine positive from negative events.

Rather, controls addressing spectral spreading of panel fluorochromes into the channel of interest, known positive and negative control samples, stimulation controls, and more need to be consulted.

The Fluorescence Minus One, or FMO, is a control that addresses the loss of sensitivity in a given channel.

The FMO is critical when accurate discrimination is necessary, such as for rare events or dimly expressed markers. During the panel development phase, it is recommended to test all possible FMOs and keep those that are critical to determine the proper gate placement.

Figure 3 shows how a typical FMO is used to set a gate. In this experiment, PBMCs were stained with 5 fluorochromes (DAPI, FITC, PE, Cy5.5PE, and APC), and acquired on a flow cytometer with 405, 488, and 633 nm excitation sources.

On the far left is the unstained control, and the fully stained sample is shown on the right. The FMO control is in the middle, stained with all the fluorochromes except for PE.

Using an unstained control to determine positivity results in the red dashed line, and it appears there are PE positive cells in the FMO. However, since there is no PE in this tube, the signal must come from somewhere else, such as spillover spreading.

This is how the FMO control helps establish positivity in the fully stained sample, and addresses the spread of the data due to fluorescence spillover into the channel of interest.

Figure 3: FMO control for a 5-color experiment.

4. Not optimizing the PMTs

Setting the voltage on a PMT can be a daunting task.

In the days of analog cytometers, people were taught to put the negative population in the first log decade. So the researcher would draw a quadrant on the plot, and adjust the voltage to put the negatives in the bottom left without giving it any further thought.

But, what really makes a good voltage? A good PMT voltage should meet the following criteria:

  1. The dimmest cells are in a region where electronic noise (EN) contributes no more than 10-20% of the variance
  2. The positive signal is on scale and in the linear region of the detector

With digital cytometers, and advancements in signal processing and data transformation, it became possible to more fully appreciate the true sensitivity of the PMT. At this time, the concept of determining an optimal voltage started to take hold in the cytometry community.

This has been formalized for BD instruments using the Cytometer Setup and Tracking (CS&T) protocols.

However, there is a simple way to do this on other machines, published by Maecker and Trotter in 2006 and termed the “peak 2” method. In this method, a dim particle is run over a voltage series, and the spread of the data, as measured by the CV, is plotted against the voltage, generating a curve that looks something like the one in Figure 4.

Figure 4: PMT optimization using the peak 2 method.

This curve shows that at low voltages the CVs are very broad. As the voltage increases, CVs decrease until an inflection point is reached and the slope of the curve changes.

This inflection point represents the point where increasing voltage does not decrease the CV, and is the best starting point for setting voltage. This can be fine-tuned for a given fluorochrome with a voltration experiment, but that’s a subject for another post.
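
As a rough sketch of how that inflection point could be picked out programmatically (the voltages, CVs, and the 10% cutoff below are invented for illustration; an eyeball check of the plotted curve is still recommended):

```python
import numpy as np

# Hypothetical peak-2 data: PMT voltage (V) vs measured %CV of a dim bead
voltages = np.array([300, 350, 400, 450, 500, 550, 600, 650, 700])
cvs      = np.array([45.0, 30.0, 18.0, 11.0, 8.5, 8.0, 7.8, 7.7, 7.7])

# Crude knee finder: first voltage step where the CV improves by less than 10%
improvement = -np.diff(cvs) / cvs[:-1]
knee_idx = int(np.argmax(improvement < 0.10)) + 1
print(f"Suggested starting voltage: ~{voltages[knee_idx]} V")
```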

By optimizing the PMT voltage, the issues of incorrectly setting voltages are eliminated.

In this methodology, the researcher applies the voltage and acquires the sample. Exactly where the negatives fall is less critical.

5. Lack of experiment-specific QC protocols

Many researchers consider Quality Control (QC) the domain of the team that is supporting and maintaining the instrument. This is true for one aspect of the QC process, ensuring the instrument is performing consistently.

This QC is usually performed in the morning; however, instrument status may change over the course of the day.

How many researchers add quality control protocols for their experiments to catch these variations?

Experiment-specific QC can be a very simple addition to the experimental workflow, but provides an invaluable resource in determining how the system is behaving when the experiment is performed and how well the staining process was performed.

These two controls, using a beadset for instrument performance and a reference control for staining variation, give the researcher an added level of confidence in the performance of instrument and protocol.

An important thing to remember about any QC protocol is that not only is it performed, but the results are written down somewhere. The adage of “if it isn’t written down, it didn’t happen” is especially true here.

Take, for example, the data in Figure 5. Here, the researchers optimized the instrument and used a bead (the 6th peak of the Spherotech 8-peak beadset) to establish target values and acceptable variation.

Before any samples were collected, the investigators ran a peak 6 bead, and adjusted voltages to achieve a target value established during experimental optimization. You can read about this process here.

To assess how well the instrument performed over time, the data (in this case PMT voltage) was analyzed using a Levey-Jennings plot. This analysis plots the data over time, and adds lines to indicate the running average, and +/- 1 and 2 standard deviations from the mean.

A representative Levey-Jennings plot is shown in Figure 5. This plot is interpreted by examining the position of each new data point. Should a point fall outside the quality control level (here that is +/- 2 SD), it is an indication to troubleshoot the issue before collecting actual samples.

Figure 5: Tracking QC in a Levey-Jennings plot. If it isn’t written down, it didn’t happen.
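
A minimal sketch of such a plot, assuming the daily QC values (here, the PMT voltage needed to hit the bead target) have been recorded; the numbers are illustrative only.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical daily PMT-voltage values needed to hit the peak-6 bead target
values = np.array([452, 455, 451, 458, 449, 453, 460, 454, 471, 452])
mean, sd = values.mean(), values.std(ddof=1)

plt.plot(values, marker="o")
for k, style in [(0, "-"), (1, "--"), (-1, "--"), (2, ":"), (-2, ":")]:
    plt.axhline(mean + k * sd, linestyle=style, color="grey")  # mean, ±1 SD, ±2 SD
plt.xlabel("QC run")
plt.ylabel("PMT voltage (V)")
plt.title("Levey-Jennings chart: investigate points beyond ±2 SD")
plt.show()   # the 471 V point falls outside +2 SD and would trigger troubleshooting
```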

There you have it, 5 lessons, from the trenches of flow cytometry, looking at important aspects of how best practices have changed over time, which practices need to be adopted, and which are outdated. Put those old, coffee-stained protocols away and take advantage of the best practices for digital instruments to write new and improved ones (coffee stains optional). Your data will thank you.

To learn more about the 5 Best Practices For Accurate Flow Cytometry Results, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.


Use This Preparation Guide For Accurate Flow Cytometry Results


A colleague related the following conversation/mini-rant that happened in their facility just the other day. The names have been changed to protect the innocent.

New User #1: “Hi, I want to sort and run a 10-color panel on preparation of dissociated rat prostate tumor cells. Can you tell me what I need to order so we can do this next week?”

Wise Flow Director: Mental facepalm. “Can you be a bit more specific?”

New User #1: “I need to look at all the different cells in this prostate tumor and sort them for RNAseq. What colors do I need?”

Sadly, this scenario is repeated more frequently than not around the world. Let’s see how this might have played out with a more informed user.

New User #2: “Hi, I want to sort and run a 10-color panel on preparation of dissociated rat prostate tumor cells. I know what antibodies I need, but I don’t know the instrument configuration. I would like to do the sort next week.”

Wise Flow Director: “Great, here is our sorter configuration. I’m happy to help you design your panel, but next week might be a bit early to plan for the experiment.”

New User #2: “But, we have an <insert excuse here> due next week.”

Notice, the user has started the process of experimental design by knowing what antibodies are needed for the assay.

When designing an experiment, the first step is to know what the experimental question is, and which antibodies are needed to answer the question.

The second step is to know the instrument characteristics. There are 3 different components of the instrument configuration that the user needs to know.

1. Excitation Sources

The excitation sources dictate which fluorochromes can be excited.

Another feature that is important to know is if the lasers are collinear (sharing the same light path), or spatially offset. This difference will have an impact on what fluorochrome combinations are feasible.

Notice the spectra of Cy5.5-PE and Alexa 700 below (Figure 1). Cy5.5-PE is excited by the 488 nm laser, and Alexa 700 by the 633 nm laser. However, both have similar emission profiles, so there is overlap between them.

These 2 fluorochromes would be poor choices for a collinear arrangement, but work quite well on a spatially offset layout, where they are not excited at the same time.

Figure 1: Laser arrangement impacts fluorochrome choice.

2. Detector Arrangement

This is the information of what detectors are paired with each excitation source, and the specific wavelength of light each detector is measuring.

Photomultiplier tubes (PMTs) can convert any photon that hits the photocathode to a photocurrent, so flow cytometers use filters to control the specific wavelengths a given PMT encounters. An example instrument configuration is shown below (Figure 2).

Figure 2: An example of an instrument configuration showing excitation sources (left), emission filter ranges (middle), and common fluorochromes (right).

3. Spillover Issues

This information is less readily available, but is extremely useful when developing a polychromatic panel. This analysis identifies which fluorochromes contribute high levels of spectral spillover into other detectors.

Spillover reduces sensitivity of measurements in that specific detector.

One such analysis is based on the paper by Nguyen and co-workers (2013). This analysis outputs values representative of the relative amount of spillover for each detector/fluorochrome pair, but an easier way to look at this is using a heat-map type analysis, as shown below (Figure 3). The more red the color, the greater the contribution of error for that fluorochrome into the detector.

Figure 3: Spectral Spillover Matrix to help design panels by avoiding the red regions of this chart.

The resulting chart looks a bit intimidating, so for convenience, one can sum across each row of the chart to obtain a number representing the amount of spectral spillover a given detector receives. Summing down the columns provides a measure of the amount of spillover a given fluorochrome contributes to a panel. That is much simpler to understand, as shown in Figure 4, where these values were calculated and arranged from highest to lowest.

Figure 4: Summary of data from Figure 3.
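
As a sketch of that bookkeeping (the matrix values and the detector/fluorochrome labels below are invented): rows are detectors, columns are fluorochromes, so row sums give the spillover a detector receives and column sums give the spillover a fluorochrome contributes.

```python
import numpy as np

# Invented spillover-error values: rows = detectors, columns = fluorochromes
fluors    = ["FITC", "PE", "PerCP", "APC"]
detectors = ["B530", "B585", "B695", "R670"]
matrix = np.array([
    [0.0, 0.8, 0.1, 0.0],
    [1.2, 0.0, 0.4, 0.0],
    [0.3, 2.1, 0.0, 0.6],
    [0.0, 0.1, 0.9, 0.0],
])

received    = matrix.sum(axis=1)   # per detector: spillover it receives
contributed = matrix.sum(axis=0)   # per fluorochrome: spillover it contributes
for d, r in zip(detectors, received):
    print(f"{d}: receives {r:.1f}")
for f, c in zip(fluors, contributed):
    print(f"{f}: contributes {c:.1f}")
```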

Now that the information about the fluorochromes and antibodies is known, the next step of panel design is to pair fluorochromes with antibodies such that the brightest fluorochromes are paired with antibodies against poorly expressed antigens, and dimmer fluorochromes with antibodies against more highly expressed antigens.

Several fluorochrome brightness charts are available, including one by BD and another by Biolegend.

Of course, don’t forget a viability dye, and please consider using a dump channel.

Once all that is worked out and the reagents are ordered, there is still more work to do. One can’t just throw the antibodies together and stain the cells. For the best success, some optimization must occur. That’s why our new user cannot reasonably expect to do the experiment next week. Of course, there are several steps in optimization.

1. Sample Prep

The first user in our SRL drama planned to make single cell suspensions from a prostate tumor.

It is critical to optimize the sample preparation procedure even before ordering antibodies.

Successful flow cytometry relies on a good single cell suspension. For many samples (water, blood, bone marrow, and spleen), this is straightforward. Solid tissues and adherent cells require more optimization.

Once the sample preparation looks good under the microscope, it’s time to give the SRL team a call and get some time on the instrument of choice — sorter or analyzer — so that the quality of the preparation can be assessed.

One recommendation when doing this is to co-stain the cells with a cell-permeant nuclear dye, like DRAQ5, and cell-impermeant nuclear dye, such as DAPI.

This combination will help assess the cell preparation in 2 ways. First, it will show how many dead cells are in the sample. Some preparation procedures are very harsh on the membrane and the cell-impermeant dye will show cells that have been damaged.

Figure 5. The use of a cell-impermeable viability dye, such as DAPI, in conjunction with a cell-permeant dye, like DyeCycle Ruby, helps clarify where live undamaged cells are in a tissue prep. Optimize dissociation conditions to maximize this population.

The cell-permeant dye will help assess the total cells and how many may be doublets.

One other thing that this test analysis can assist with is validating the instrument nozzle size. It is recommended that the nozzle be 4-5 times the size of the cells. If the cells are too large for the nozzle, the result will be stream fanning, and poor sort yields and recoveries.

Running a test sort on this optimization sample is a good way to make efficient use of user material.

2. Titration

High concentrations of antibody cause a loss of sensitivity by increasing non-specific background binding.

It is always good to perform a titration experiment to identify the best concentration for the cell type and assay. A typical titration curve is shown below (Figure 6).

The blue line represents the vendor recommended concentration, and as seen from these curves, only Ab#3 is recommended to be used at the vendor concentration, while Ab#1 and #2 can be used at approximately ¼ the recommended concentration.

Figure 6: Titration results from 3 different antibodies. The Staining Index, calculated using the Telford method, measures the separation between the positive and negative populations. The concentration giving the best SI is chosen for staining.

3. Voltage optimization

Identifying the best voltage to run the instrument is the third step of optimization. This has been discussed in detail previously.

To summarize, single-stained cells (at the titrated recommended concentration) are run across a voltage range and the SI calculated. The optimal voltage is where the following criteria are met.

  1. The positive signal is on-scale
  2. The positive signal is in the linear range of the PMT
  3. The best SI is obtained

That is a lot of work to get to a point where one can begin to perform the experiment.

Hopefully, it is now clear why just walking into the flow cytometry facility and announcing that a sort must be done next week is a bit fatuous.

Don’t forget to estimate how many cells will be needed to perform your downstream analysis. A quick back-of-the-envelope calculation can give you an estimate of how many cells you need to bring to the instrument.

For example, Table 1 shows some numbers assuming 100,000 cells are needed for a downstream application. The frequency of the population, the recovery from the sorter, and recovery from sample processing all factor into how many cells an investigator needs at the start of the experiment.

The time to sort is an estimate of how long it would take to sort the cells at different nozzle sizes (which relates to how fast the sorter can run, and thus the droplet generation frequency).

Table 1: Relationship between cells needed for downstream application, frequency of target population, and expected recoveries from sample processing and sorting, and the time to sort. This does not include setup time on the instrument.

So, as can be seen, there is a lot of preparatory work that needs to be done before the first experiment can be attempted. Each step builds upon the last step, and extends where the assay is going. This doesn’t include testing the final panel, identifying the best controls for gating, and monitoring the performance of the experiment. Be prepared for some trial and error in this process, and don’t expect perfect results the first time around.

In conclusion, an educated user is a good user, and makes the SRL staff’s job that much easier. The partnership between investigator and SRL staff is a rewarding one, when both parties work together to achieve the ultimate goal of generating excellent data and sort results that help answer the biological question being tested. A little knowledge, some planning, and careful validation goes a long way to this end.

Overheard in the sort lab:

New User #3: “My PI sent me over here with this tube, and he wants the red cells.”

Wise Flow Director (to self): “Here we go again…”

Note: These are actual conversations; the names have been changed to protect the innocent (or guilty).

To learn more about how to Use This Preparation Guide For Accurate Flow Cytometry Results, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.


5 Essential Beads For Flow Cytometry Experiments


Flow cytometry is designed to measure physical and biochemical characteristics of cells and cell-like particles using fluorescence.

Fundamentally, any single-particle suspension (within a defined size range) can pass through the flow cytometer.

Beads, for better or worse, are a sine qua non for the flow cytometrist. From quality control, to standardization, to compensation, there is a bead for every job. They are important — critical, even — for flow cytometry.

Beads can do much to enhance flow cytometry, so without further ado, let’s delve into the world of beads.

1. Quality Control Beads

Starting at the top are the quality control beads. These beads represent the first line of defense for any facility to determine if their instrument is working within the specifications set by either the vendor or, more importantly, the facility itself.

Over time, each instrument’s quirks and idiosyncrasies are revealed as quality control beads are run and the data analyzed.

It is critical to analyze the trends in the QC bead data so that issues can be spotted before they become a real problem.

Before the days of automated QC protocols, each facility had to establish their own QC program. My go-to beadset at the time was the AlignFlow series from Molecular Probes (now ThermoFisher). This series required a different bead for optimization of each laser. Luckily, there are better options now.

In an effort to automate QC on the BD line of digital cytometers, Cytometer Setup and Tracking (CS&T) was introduced to DIVA in the mid-2000s. This protocol, and the associated beads, provided an automatic way to collect QC data and, more importantly, track it.

This was a huge boon to the core facilities trying to keep on top of instrument QC. However, it was limited to the BD instruments, and if the instrument was one of their Special Order Research Products (SORP), or had a non-standard filter arrangement, CS&T could have difficulties.

Figure 1: QC reports using the CS&T system.

Recently, another automated QC system that can be used on many flow cytometers was released by Cytek. Their program, and associated beads, is called the QbSure system.

In this case, the software makes measurements of both Q (detector efficiency) and b (background noise), as well as an additional metric called R.

The R value helps to characterize the resolution of a system, and a lower R is better.

Figure 2, taken from the Cytek website, shows R values for 3 different instruments and the resulting scatter plots for a Cy7-PE stained sample with low levels of expression. Notice how, as the R value decreases, the separation between negative and positive increases.

Figure 2: The effect of R value on detection of a dim signal. From Cytek website.

2. Compensation Particles

Let’s face it, compensation is a necessary pain. Following the 3 rules of compensation and using automated algorithms is best practices when it comes to this process.

The carrier that escorts the fluorochrome to the intercept is not important, as long as the positive and negative populations have the same autofluorescence.

So, it comes down to convenience. If you’ve got abundant cells, say from a spleen, and the antigens are all reasonably expressed, using cells is fine. However, in the case where you don’t have extra cells, you have dim antigens, or are looking for targets on rare cells, using the sample cells for a control is less appealing.

Enter the antibody capture bead. Sold by many vendors under different names, these are plastic beads that are coated with an antibody that can bind some part of another antibody.

It could be the light chain or the heavy chain, or it could be species-specific. You can even use Protein-A or Protein-G beads for antibodies that don’t work with other beads.

There are several advantages of compensation beads. First, the beads bind all the antibody in the solution, resulting in a bright signal with low CV. Compare the plots in Figure 3 of FITC labeled cells (left) or beads (right).

It is much easier to cleanly identify the positive population with the beads. Additionally, this ensures that the first rule of compensation is met.

Figure 3: Comparison of beads vs. cells. Cells (left) or antibody capture beads (right) were stained with the same FITC conjugated antibody. The center plot shows the overlay of the cells and beads. The black dashed line shows the lower limit of the bead positive signal. It is clear from these plots that it is easy to gate the positive from the negative, and that the positive signal is at least as bright as the experimental sample will be.

Of course, the second rule is easy to meet, as long as one identifies a positive and negative bead in each control.

Avoid the desire to use a universal negative or, worse yet, cells as a negative.

As shown in Figure 4, if unstained cells were used as the negative population, the compensation would be incorrect, as the cells have a different background fluorescence than the beads.

Figure 4: Identification of the proper negative population is critical for accurate compensation.

Finally, using beads ensures that the controls meet the third and final rule of compensation: that is, that the compensation control and experimental sample must be collected under identical conditions. This means the same treatment (e.g. fixed or unfixed), with the same antibody at the same sensitivity — don’t touch that voltage!

3. Counting Beads

If an absolute count of cells is needed, there are a few options. The first is to use a volumetrically driven system (MACSQuant, Accuri, Attune, etc.), which has a very accurate measure of the volume of sample injected into the system.

Barring that, the next best thing is a counting bead. These are beads that have a very precise, known count so that it is possible to calculate cells using a proportion.

They come in 2 different preparations. The first preparation is a tube containing a precise number of beads. A defined volume of sample is added to the tube, the sample is run on the flow cytometer, and the absolute count is calculated by proportion:

cells/µl = (cell events ÷ bead events) × (number of beads in the tube ÷ sample volume in µl)

The second preparation of beads comes already in solution and the researcher adds a defined amount of beads to the tube of interest. Knowing the amount of beads added, one can do a similar calculation to determine the concentration of cells in the tube.

It is important to note that in both of these cases, accurate pipetting is critical.
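A minimal sketch of that proportion calculation, using hypothetical event counts and bead lot values:

```python
# Hypothetical event counts and bead lot information.
cell_events = 62_300      # events in the cell gate of interest
bead_events = 4_850       # events in the bead gate
beads_in_tube = 50_000    # beads per tube, from the vendor's lot sheet
sample_volume_ul = 100    # volume of sample added to the tube, in microliters

# Beads and cells are sampled in the same proportion, so:
cells_per_ul = (cell_events / bead_events) * (beads_in_tube / sample_volume_ul)
print(f"Absolute count: {cells_per_ul:,.0f} cells/uL")
```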

4. GloGerm Beads (or YG beads)

Biosafety of cell sorters is an important issue for sort operators. ISAC has released guidelines for sorter biosafety, and the NIH campus has adopted specific rules as well.

If your facility doesn’t have any rules in place, it is a good time to talk to them — and your Biosafety office — about these best practices.

One important component of biocontainment validation is a test to ensure that the engineering controls are working correctly.

This can be done using a surrogate for cells (a bright bead) and an air sampler.

The initial work was done using a particle called the “GloGerm bead” (and yes, you can buy them on Amazon.com). These beads are highly fluorescent under blacklight, and are great to teach about hand washing, aseptic technique, and the spread of disease by physical contact.

Early work in characterizing the efficiency of engineering controls used these beads. Unfortunately, they had 2 downsides:

  1. They needed to be washed extensively from the solution these beads came in, to get them into an aqueous solution.
  2. The beads had mixed sizes and could bind to dust particles.

Enter the Polysciences Yellow-Green (YG) bead, a highly fluorescent and uniform particle available in sizes from 0.5 µm to 10 µm.

First discussed at the ISAC XXII International Conference, by Hank Pletcher and Jonni Moore from the University of Pennsylvania, this bead and a collection system from Environmental Monitoring Systems offer an inexpensive and easy-to-use system for monitoring biocontainment of a cell sorter.

At the heart of this system is the Cyclex-D cartridge, which is a sealed container that attaches to a pump (via tubing). As air is drawn into the Cyclex-D cartridge, it passes over a sticky coverslip, and any particles in the air stick to this coverslip.

After sampling, the cartridge is opened and the coverslip inspected under the microscope for the presence of any beads.

Figure 5: Results of YG testing of a FACSAria. (A) Test Failure to demonstrate Cyclex-D cassette worked. (B) No Failure with AMS on. (C and D) Two independent tests of a 10’ failure with the AMS on, no beads were detected on either coverslip.

In testing the containment, 200 liters of air (at a rate of 20 liters/minute) is sampled. Since the Cyclex-D cartridge is small, it can be placed at any distance from the point you wish to sample.

The Cyclex-D system is a wonderful tool for containment validation, and it is strongly recommended for all cell-sorting facilities.

5. Standardization beads

This is a broad category of beads that can be used for a variety of different techniques. One of the most common is the determination of the number of antibodies bound to the cell.

If using PE labeled antibodies, well-characterized beads can be used, since the F/P ratio for PE is generally 1:1, due to steric constraints.

If the antibody is labeled with another reagent, the Quantum Simply Cellular beads are a great option.

This beadset has 5 beads, 1 unstained and 4 with increasing numbers of antibody binding sites. The beads are labeled and run on a flow cytometer allowing for the generation of a standard curve, as shown below:

Figure 6: Standard curve generated using the Quantum Simply Cellular beads to determine the antibody binding capacity (ABC). The dashed lines represent the 95% confidence interval.

Using the regression equation, the MFI of an unknown sample can be converted to an ABC value. Thus, it is possible to estimate the number of antibody binding sites on a target cell.
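A minimal sketch of that standard-curve step, assuming hypothetical lot values and a log-log linear fit (one common approach for these curves):

```python
import numpy as np

# Hypothetical lot values for the 4 stained bead peaks and their measured MFIs.
bead_abc = np.array([4_000, 25_000, 150_000, 600_000])  # vendor-assigned binding capacities
bead_mfi = np.array([850, 5_300, 31_000, 125_000])      # measured median fluorescence

# Fit a straight line in log-log space.
slope, intercept = np.polyfit(np.log10(bead_mfi), np.log10(bead_abc), 1)

def mfi_to_abc(mfi: float) -> float:
    # Convert the MFI of an unknown sample into an antibody binding capacity.
    return 10 ** (intercept + slope * np.log10(mfi))

print(round(mfi_to_abc(12_000)))  # ABC for an unknown sample with MFI = 12,000
```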

This technique can be useful in experiments like receptor occupancy. Another way to use this technique is to standardize the ABCs on a target of interest across multiple experiments.

Beads are a very useful item in the flow cytometrist’s toolkit. Each class of bead has a different use for improving the understanding of how the instrument or the experiment is performing, and care must be taken not to over-interpret the bead results. By using beads, troubleshooting experiments and instrument issues becomes a breeze.

When the PI growls

When the paper’s rejected

When the Fortessa is down… again

I simply remember my favorite beads

And then I’m experimenting again

To learn more about the 5 Essential Beads For Flow Cytometry Experiments, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.


How To Achieve Accurate Flow Cytometry Calcium Flux Measurements


Most flow cytometry experiments work with antibodies conjugated to a fluorochrome for some variation on immunophenotyping. However, any fluorochrome that is excited by one of the available excitation sources, and emits within the range of the detectors, can be incorporated into an experiment.

One of the great pleasures of the past was leafing through the Molecular Probes handbook, seeing what fluorescent dyes had just been released, and thinking of possible applications for them. The classic example of non-antibody directed fluorochromes are DNA-binding dyes like PI, 7-AAD, and Hoechst, but there are many others.

Dyes exist for the detection of everything from large nucleic acids to reactive oxygen species, and from lipid aggregates to small ions. Concentrations of physiologically important ions such as sodium, potassium, and calcium can be important indicators of health and disease.

Calcium ions play an especially critical role in cellular signaling. As a signaling messenger, calcium is involved in everything from muscle contractions, to cell motility, to enzyme activity.

Cells tightly regulate calcium to ensure that the cytoplasmic concentration of Ca2+ is in the 100 nM range.

Mechanisms of calcium homeostasis include sequestering Ca2+ in different organelles and proteins, as well as actively secreting excess calcium from the cell.

Clearly, a probe to monitor calcium dynamics in real time would be valuable.

Fortunately, there are several different fluorochromes available for measuring changes in calcium levels (flux) by flow cytometry. A complete list can be found here.

The first Ca2+-responsive fluorochromes were published by Tsien in 1980 and, as described in the Molecular Probes Handbook, were later improved by Haugland.

Modern flow cytometrists have 2 classes of dyes available to them: those whose signal simply increases in the presence of calcium, and those that respond differently to the calcium-bound and calcium-free states (ratiometric dyes).

When preparing for a calcium flux experiment, there are a couple of things that need to be considered for designing the experiment.

  1. What excitation sources are available for use? Most of the common calcium-sensing fluorochromes are excited off a 355 nm, 405 nm, or 488 nm laser. If the plan is to couple the calcium flux assay to immunophenotyping in an effort to see which cells are responding to a given stimulus, choosing a calcium-sensing dye is an important decision.
     
    With the increased presence of UV lasers on systems (due in part to the brilliant UV dyes), the use of UV-excitable Indo-1 is a great choice, allowing for phenotyping off of other lasers.
  2. Sensing or Ratiometric Calcium dye? Using a ratiometric dye — a dye whose excitation or emission peak changes in the presence of calcium — is strongly recommended. The advantage is that the dye loading step of the assay is less critical than for dyes that only respond to the presence of calcium.
  3. Temperature control? How critical is it to have biologically relevant temperatures for the response? Although cells will flux at room temperature, the kinetics are different at 37 ℃. There are several ways to achieve temperature control for flow samples. Necessity is the mother of invention, and using tools common to fans of the TV show MacGyver, it is pretty easy to create your own temperature control system using an aquarium submersible pump, an old water bath, some plastic tubing, and waterproof tape.
     
    Take a 5 ml tube that fits on your flow cytometer, wrap 4-6 coils around it, and secure them with tape very tightly. The goal is to make a jacket for the tube.
     
    Next, connect the tubing to the pump, and place in the water bath. The final step is to calibrate the system: put your sample solution in the tube, and add a thermometer. Set the water bath to a couple of degrees above desired temperature in the tube, and monitor it.
     
    Make sure you have a stack of paper towels or absorbent benchpads handy, as leaks could spring at any moment!
  4. Capturing initial Ca2+ response? On most flow cytometers, you have to take the sample tube off to add the stimulus. This means that the earliest Ca2+ response is missed. That is why there is usually a break in the data right after the unstimulated baseline.
     
    With the advent of more syringe- and peristaltic pump-based systems, this may become a moot issue for future experiments.

Calcium Responsive Probes

The first class of Ca2+ probes are those that increase fluorescence in the presence of free calcium.

Two of the prototypical dyes are shown below, in Figure 1. Fluo-3 and Fluo-4 are both excited by 488 nm light, and fluoresce in the low 500 nm range. Based on data from the Molecular Probes Handbook, Fluo-4 has a better response, compared to Fluo-3, in a FLIPR readout. The original paper describing Fluo-4 also indicated that it performed better than Fluo-3. As with everything in flow cytometry, make sure to test the reagents that will be used in the assay.


Figure 1: Fluorescence spectra of Fluo-3 and Fluo-4.

There are several additional probes that can be used in this mode.

To use these, cells are first loaded with a membrane-permeant version of the dye, which is then cleaved by esterases to release a charged version that remains trapped in the cell.

Dye titration should be performed to optimize loading conditions for each cell type.

For difficult-to-load samples, addition of 0.01-0.02% Pluronic F-127 has been shown to facilitate dye loading. It is critical to keep cell concentrations consistent from run to run.

Single response dyes are easy to use and require only lasers that are available on pretty much every instrument in the field. Unfortunately, different levels of loading will often result in different responses.

The second class of Ca2+ probes are the ratiometric dyes.

The most commonly used ratiometric dye is Indo-1. It is excited by the UV laser and shifts its emission spectrum when it is bound to calcium (Figure 2). By evaluating the ratio of free to bound, a more accurate and loading-independent kinetic reaction can be measured.

Figure 2 : Excitation and Emission profile for Indo-1.

For those instruments that don’t have a UV laser, it is possible to use Fura Red as a ratiometric dye. In the absence of Ca++, the dye is best excited by the 488 nm laser; in the presence of Ca++, the best excitation is off the 405 nm laser (Figure 3). Thus, the ratio of the emission off the 405 nm laser divided by the emission off the 488 nm laser provides a ratiometric response similar to Indo-1.

Figure 3: Excitation and Emission profile for Fura-Red
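A minimal sketch of how such a ratio parameter could be derived per event, using made-up emission values (the scaling factor is simply a matter of convenience):

```python
import numpy as np

# Hypothetical per-event emission values for a Fura Red-loaded sample.
# The calcium-bound form is better excited at 405 nm and the calcium-free
# form at 488 nm, so this ratio rises as intracellular calcium rises.
violet_emission = np.array([1200.0, 1500.0, 4200.0, 5100.0])  # off the 405 nm laser
blue_emission = np.array([3000.0, 2800.0, 1400.0, 1100.0])    # off the 488 nm laser

# Many acquisition packages let you define this as a derived parameter;
# scaling (e.g. x1000) keeps the ratio in a convenient place on a linear axis.
ratio = 1000 * violet_emission / blue_emission
print(ratio)
```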

Running a Calcium Flux Experiment

Gather the cells and label with the dye of choice. Keep the unused cells in the dark at RT while conducting a flux. Make sure that the stimulation reagents are ready and that a pipet is dialed in to deliver the correct amount of stimulant to the cells.

Another reagent to have on hand is a calcium ionophore. The 2 most common compounds used are ionomycin and A23187.

When added to cells, the ionophore will shuttle calcium ions across the plasma membrane to cause the maximal response in the cells.

While some protocols will have the ionophore control run on a separate tube, it is often better to add ionomycin at the end of the acquisition to get a measurement of the maximal fluorescence of the cells. You then have data for baseline, stimulated, and maximal calcium response for each tube.

With early instruments, ratiometric measurements required collecting the data while manually trying to balance the 2 emission signals. With newer instruments, a ratio parameter can be set up and collected at the time of data acquisition.

This is achieved by making sure the signal from both dyes is on-scale, setting up a histogram plot — or better yet, a plot of ratio vs. time — and making sure that the ratio of the dyes sits at a reasonable place on the scale (somewhere around 1,000), as shown in Figure 4. And yes, you do run calcium flux data on a linear scale.

A typical analysis is shown in Figure 4 from Graf et al. (2007). In this experiment, the authors used Indo-1 to examine calcium flux in T-cells that had formed conjugates with wild type (red) or mutant (blue) antigen-presenting cells.

There is an initial baseline to establish the fluorescence level. The first arrow indicates where the conjugates were formed, which causes a break in the data.

The red line shows an increase in calcium flux, indicating that the wild-type APCs induce a calcium flux in the T-cells. The blue line doesn’t change, demonstrating that the mutant APCs are not capable of inducing a calcium flux. The green line, representing unbound T-cells, does not flux calcium either.

At the second arrow, a little over 10 minutes from the start of the experiment, the authors added Ionomycin. This causes an influx of calcium into the cells and, as can be seen, all of the T-cells showed a positive calcium response.

Figure 4: Figure 6c from Graf et al. (2007), showing the typical analysis of a calcium flux experiment.

Notice how the authors cleaned up the data by plotting the median fluorescent intensity at each time point. Additionally, the data is smoothed, which helps reduce noise. There are several common methods for smoothing data.

The first method is a moving average. In this process, at time point t, the mean of the next N data points (say, points n to n+24 for a 25-point window) is calculated. At t+1, the window slides by one point (n+1 to n+25), and so on. The window size can be chosen based on the experimental needs.

A second method is to use a Gaussian smoothing, which places less weight on values farther away from the center. The choice is up to the investigator.
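A minimal sketch of both smoothing approaches, applied to a synthetic trace standing in for the per-time-point medians (the window size and sigma are arbitrary choices):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
# Synthetic stand-in for the per-time-point median ratio: a baseline, a step up
# after "stimulation" at point 100, plus noise.
ratio = 1.0 + 0.5 * (np.arange(300) > 100) + rng.normal(0, 0.1, 300)

# Moving average: every point becomes the mean of a sliding 25-point window.
window = 25
moving_avg = np.convolve(ratio, np.ones(window) / window, mode="same")

# Gaussian smoothing: nearby points are weighted more heavily than distant ones.
gaussian_smooth = gaussian_filter1d(ratio, sigma=5)
```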

This graph shows the one limitation on traditional analysis, and that is the break that prevents the detection of the earliest calcium flux. There used to be a system, called the Time Zero System, sold by Cytek that was able to get better measurements of the initial calcium flux.

With the advent of newer syringe-driven systems, there became another option, as demonstrated in Vines et al (2010).

Using the Accuri C6, which has a pump system, the sample tube is not under pressure so it is possible to add stimuli directly while continuing to acquire data. Figure 5 below, taken from Figure 2 from the Vines paper, shows how this data looks on the Accuri and a Cyan.

Figure 5: Figure 2 from Vines et al (2010), showing how having the ability to add stimuli without removing the tube from the sip allows for the earliest calcium flux to be captured.

In this paper, the authors were limited to using Fluo-4, so the data could not be acquired in a ratiometric manner. With the improvements on the newer cytometers, it should be possible to perform similar experiments and capture this early response using a ratiometric measurement.

To summarize:

  • Start with identifying the instrument to be used, which will dictate what fluorochromes can be used.
  • Use a ratiometric dye whenever possible, because this allows the data to be acquired without concerns over dye loading differences.
  • Make sure to have a reagent, like ionomycin, to determine the maximal fluorescent signal.
  • Choose an appropriate smoothing model.
  • Decide how best to present data (graphically, fold over control, etc.).

Calcium experiments can be very informative and, with the advent of cheaper UV lasers, more and more researchers can use ratiometric measurements to evaluate the signaling processes in phenotypically defined populations.

To learn more about How To Achieve Accurate Flow Cytometry Calcium Flux Measurements, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.


How To Choose The Correct Antibody For Accurate Flow Cytometry Results


Next to the flow cytometer itself, the most important component of a flow cytometry experiment is the antibodies. It is by using antibodies conjugated to fluorescent markers that we are able to identify our specific cells of interest and quantitate the amount of our target on the cell.

When I started in flow cytometry, I was immediately taken by the technology, and only later began to appreciate the importance of understanding what my reagents were and how they worked.

With the development and rise of monoclonal antibodies, each lab or group gave them a different name. This name could be the specific clone, where the antibody was harvested, or perhaps the target to which the antibody bound.

You might have attended a talk where one investigator discussed their studies on VLA-4, while a second might have discussed information obtained using Clone 9C10. These are both the same thing, but it was like the wild west out there.

This led to the development of the cluster of differentiation (or CD) nomenclature that we use today. The CD nomenclature was first established in 1982 at the first International Workshop and Conference on Human Leukocyte Differentiation Antigens, where researchers from around the world set criteria to classify the different monoclonal antibodies that had been generated, grouping clones that bound the same antigen into the same CD.

At present, well over 370 different clusters of differentiation have been recognized on human cells. This has also led to the proliferation of those lovely posters that vendors put out, showing the relationship between a CD antigen and the immunophenotype (that is, the cells that express that antigen), allowing us to identify and classify the cells of interest.

For the flow cytometrist it is important to determine how best to select the correct antibody for a given experiment based upon several factors, including:

  1. The target
  2. The assay
  3. Personal experience or preference
  4. Lab history
  5. Published history/the literature
  6. And minor choices such as cost, company, fluorochrome, etc.

Someone entering the field may be very confused as to how to make these choices, especially when they’re asked to design a new polychromatic flow cytometry panel on their own.

Since panel design and antibody choice tend to go hand-in-hand, let’s review some of the important ways to determine how best to choose your antibody, identify the best fluorochrome choice, and build a panel around your hypothesis.

Choose An Appropriate Antibody

Going to the literature is easy, right? Just see what others are publishing and use that. Even better, head over to the website of your favorite vendor. They should have everything you need.

Well, maybe, but maybe not. Take, for example, anti-human CD3. The Human Cell Differentiation Molecules (HCDM) website recognizes 5 different clones that target human CD3.

To see how prevalent these were in the literature, 2 different metrics were used. The first was to search the Benchsci.com database for these clones and report the published figures for each clone (search term flow cytometry and clone name). The second was to look at the published OMIPs and for each of the human T-cell panels score which OMIP used which clone.

These 2 distinct measurements were quite revealing.

FIGURE 1: CD3 clone use in flow cytometry. Raw data shown in the table below.

Noting that these use 2 different databases and criteria, it is very clear that HIT3a is more represented in the BenchSci database, while UCHT1 is more commonly used in the OMIPs. Which one is better?

The table below shows a bit more detail about some of these molecules. Notice the top 5 are recognized by HCDM, and the bottom 2 were found during the OMIP review.

The solution to this issue is clearly more research.

Don’t just go with what you first find in the catalogue that binds your target. Look deeper and see if there are reasons one reagent is better than another.

Using google-fu, and tools like BenchSci.com and the Antibody Registry, can help you find the targets and where you can get them. Both of these resources offer links to publications, making it easier to learn about the reagent in question.

Choose An Appropriate Format

There are several different production methods for antibodies, each with their positives and negatives. These production methods also impact the quality of the binding of the antibody and ultimately the staining of the cells.

1. Polyclonal Antibodies

This is the original way to produce antibodies. Injecting an animal with the antigen and collecting the serum is a cheap and easy way to generate multiple antibodies against a given target.

There are disadvantages of this method, especially in the case of using polyclonal reagents in flow cytometry. As shown below in a Western blot assay, it is possible to identify non-specific binding of the polyclonal antibody.

However, in a flow cytometry experiment, it is impossible to distinguish the non-specific staining from true staining, and Isotype controls do not help with this.

Figure 2: Western blot analysis showing the identification of non-specific binding (in red) compared to positive staining (blue). In flow cytometry, it is not possible to easily identify non-specific binding of this nature.

2. Monoclonal Antibodies

These are created by fusing an antibody-secreting cell to an immortalized cell to produce a hybridoma cell line that produces a single antibody.

Only those cell lines that secrete high levels of antibody that have high binding efficiency to the target are kept. Köhler and Milstein first created these fusions, leading to the Nobel Prize in Physiology or Medicine in 1984.

Hybridomas have the useful property that they all secrete an antibody identical to that of the parent cell.

Specifically, these reagents bind to the same epitope, making them ideal for flow cytometry experiments.

One worry about using hybridomas is the concern of cell-line drift, where high-passage number cultures may no longer produce the same antibody as when first characterized.

3. Recombinant Antibodies

A newcomer to the flow cytometry world, these are made using recombinant DNA technology, where the genes for a specific antibody are isolated and cloned into a vector for expression.

Since they are genetically engineered, it is possible to change the Fc portion of the antibody, reducing or even eliminating binding to the Fc Receptors on target cells.

These reagents also have the advantage of being less expensive to produce compared to traditional hybridoma techniques, and with the various tools available to manipulate DNA, it is possible to make modifications to the antibody DNA sequence to improve affinity to the target of interest. It is also easier to target those antigens that have been difficult to make by traditional methods.

Bradbury et al., in a commentary in Nature, suggest that moving to recombinant antibodies will improve the quality of these critical reagents.

While monoclonals still dominate the flow cytometry field, as more recombinant antibodies become available, it would seem they will eventually take over the field. This will really help researchers and be a key to improving reproducibility.

How To Start Using A New Antibody

When it comes to using antibodies, one of the biggest issues is the potential for non-specific binding (NSB).

This is where the antibody binds to any cell and increases the background signal. It can also be due to specific, but unwanted, binding of the Fc portion of the antibody to the Fc receptor on the surface of some cell types.

The only way to minimize NSB in an experiment is through proper experimental design and qualification of the reagents.

Fortunately, there are several different steps that can be addressed and 2 — titration and blocking — that specifically relate to antibody use.

1. Titration

When there is an excessive amount of antibody in the staining solution, antibodies will bind with low affinity to off-target sites on the cell. This leads to an increase in background and a reduction in the separation between the positive and negative signals.

The solution for this is to properly titrate your reagents.

Typically, the titration should start at twice the recommended concentration of reagent and proceed through 6-8 serial dilutions of the antibody. The Staining Index can be used to compare the different dilutions. A typical titration curve is shown below:

Figure 3: Typical titration curve. The optimal concentration on this curve would be defined by the midpoint between the shoulders of the curve, where the staining index begins to decrease.
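One common form of the Staining Index divides the separation between the positive and negative medians by twice the spread of the negatives. A minimal sketch, using simulated intensities in place of real exported data:

```python
import numpy as np

def staining_index(pos: np.ndarray, neg: np.ndarray) -> float:
    # One common form: separation of the medians divided by twice the SD of the negatives.
    return (np.median(pos) - np.median(neg)) / (2 * np.std(neg))

# Simulated intensities standing in for one antibody dilution.
rng = np.random.default_rng(1)
neg = rng.normal(100, 30, 5_000)
pos = rng.normal(1_500, 300, 5_000)
print(round(staining_index(pos, neg), 1))
# Repeating this for each dilution and plotting SI vs. concentration gives the titration curve.
```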

2. Blocking

Using a special reagent to precoat the cells reduces the possibility of the fluorescent antibody binding the cells non-specifically.

Everyone has a favorite reagent (e.g. Fc Block, normal serum, purified IgG) and a favorite concentration, which makes it hard to determine the best approach.

One should perform an experiment where different concentrations of blocking reagent are used and the signal of the negative population is examined to determine which concentration is best.

Andersen and co-workers (2016) did just this, trying to determine both concentration and reagent to use to work with their cells (human PBMCs and monocyte derived macrophages).

They blocked the cells with different reagents at different concentrations, and stained the cells with a labeled isotype control. The median fluorescence intensities (MFI) of the blocked and labeled cells were compared to those of unstained cells to determine the best reagent.

Based on this data, the authors concluded that the use of purified human IgG was best and least expensive as a blocking reagent. They also recommended that the cells should be blocked for 15 minutes on ice.

Other steps in reducing NSB include the use of viability dyes, adding dump channels to the panel, and gating on similarly sized cells.

With the added emphasis on reproducibility, it is critical to look at every step where experiments can be improved. No single step makes an experiment more reproducible. Rather, it is a process of making changes at each stage that leads to reproducibility.

Antibodies comprise a critical component that needs to be reviewed. As Bradbury et al. in a commentary in Nature pointed out, the global spending on antibodies is about $1.6 billion a year, and it is estimated about half of that money is spent on “bad” antibodies. This does not include the additional costs of wasted time and effort by the researcher using these bad antibodies. Using tools to identify the best reagent to use, considering a switch to recombinant antibodies, and properly validating reagents for use in an assay are 3 steps that will improve the reproducibility of your experiments.

To learn more about How To Choose The Correct Antibody For Accurate Flow Cytometry Results, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.



5 Considerations For Statistical Analysis Of Flow Cytometry Data


Congratulations, your grant has been funded! Now comes the hard part — performing the work that you are being funded to do. This means generating data and publishing papers. What was that hypothesis again? It must be in the grant somewhere, right?

For the sake of this blog, the grant is to study the effects of Cordilla Virus, which is known to cause lipid membrane flipping on CD8+ T-cells. This flipping results in phosphatidylserine expression on the outer membrane, causing infected cells to be phagocytosed by macrophages. A lead compound, Masiform D, has been identified that shows promise in reducing the viral load of patients infected with Cordilla Virus.

To avoid even the appearance of HARKing — Hypothesizing After The Results Are Known — it is important to start at the beginning of the statistical analysis process even before the first experiments are performed. This process consists of 5 steps:

1. Set the Null Hypothesis

The null hypothesis (H0) is a statement of what we think the state of the system is. In this case, the state of the system after treatment would be that there was no change in the system — i.e. the means between control and experimental are equal.

When performing statistical inference testing, this is the baseline state of our system. We are going to demand a great deal of proof (“beyond a reasonable doubt”) to reject the null hypothesis and accept the alternative hypothesis (HA).

If our data rises to the level of confidence needed to reject the H0, we can be very confident in our HA. If our data does not allow us to reject the H0, it doesn’t mean that the H0 is true; it just means the evidence doesn’t support rejecting it.

When stating the null, it is important to remember that the “equals” sign will be associated with the H0, and not the HA. Additionally, if we are predicting the effect will be in one direction or the other, that is acceptable in the H0.

Going back to our hypothetical grant, we are looking to see if Masiform D reduces the viral load on patients infected with Cordilla Virus. Since we only care if Masiform D decreases viral load in our patients (as opposed to increased or stable load), the H0 and HA could be stated like this:

H0: Viral load (VL) in patients infected with Cordilla Virus and treated with Masiform D (MD) remains the same or is increased compared to patients infected with Cordilla Virus who were not treated with Masiform D (UT): VLMD ≥ VLUT

HA: Viral load (VL) in patients infected with Cordilla Virus and treated with Masiform D (MD) is decreased compared to patients infected with Cordilla Virus who were not treated with Masiform D (UT): VLMD < VLUT

Notice 2 important factors about the H0 and HA

  1. They are mutually exclusive — this means that the H0 and HA do not share anything in common.
  2. They are collectively exhaustive — this means that all possible outcomes are explained by either the H0 or the HA.

Having set the stage, it is time to build the other factors that will go into determining whether we accept or reject the H0.

2. Establish a threshold

The threshold (or α) can be thought of as the finish-line. If the analysis of the data crosses this threshold, the H0 is rejected and the HA is accepted.

This threshold is typically set at 0.05, based on convention. However, lowering the threshold to 0.01 or even 0.001 may be more appropriate.

There is no easy guidance for this choice, except that the higher the threshold (the larger the allowed p-value), the easier it is to find significance, and the easier it is to commit a type I (false positive) error. The table below is based on this article, which categorizes p-values based on the consequences of a false positive.

p-value (assumes a 2-tailed test) | Consequences of a false positive
0.01 | Death — think clinical trials
0.05 | No publication — standard for most publications
0.1 | You lose time and money — increased false positives mean chasing down more leads, but few other consequences
0.2 | You lose more time and money — increased false positives mean chasing down more leads, but few other consequences
0.49 | You are wrong — just a bit better than flipping a coin.

As the desired p-value decreases, it is valuable to increase the “n” — that is, the number of samples. The required sample size can be estimated using a power calculation.

Power is also related to the chance of a false negative (β), since Power = 1 − β. You can find a free power calculator here. It’s always a good idea to check with your local biostatistician before you finalize your plan!
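If Python is handy, the statsmodels package offers the same kind of power calculation; the effect size, alpha, and power below are illustrative values only:

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the n per group needed to detect a large effect (Cohen's d = 0.8)
# with alpha = 0.05 and 80% power in a one-sided, two-sample t-test.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.8, alpha=0.05, power=0.8, alternative="larger"
)
print(round(n_per_group))  # roughly 20 samples per group
```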

3. Performing the experiments

This is the fun part of the process. Make sure to go through the process to design and validate an experiment that is discussed in other parts of the blog, such as: panel design, instrument optimization, and preparing for your first experiment.

In the case of our hypothetical experiment, flow cytometry was chosen to measure the CD8+ cells in our patients, and to determine the amount of phosphatidylserine on the surface of these cells using the Annexin V reagent.

After the experiment is done, the primary analysis will be performed. A guide about gating and primary analysis can be found here. The important point is that the correct numerical data is extracted from the primary analysis, and the gating is done using all the controls necessary to identify the populations of interest.

4. Performing the statistical test

This is where everything comes together. The experiments are complete, and the numerical data has been extracted.

In the case of our hypothetical experiment, CD8+ cells were identified through immunostaining and the amount of Annexin V staining was described by fluorescence intensity.

Since this is a measure of changes in fluorescent intensity, the data is used to calculate the resolution metric, the RD. The RD is:

RD = (Median_pos − Median_neg) / (rSD_pos + rSD_neg)

Thus, the dataset is the list of RD for patients who have been treated versus those who have not been treated. From there, the statistical analysis will be performed.
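A minimal sketch of how the RD could be computed from exported events, assuming the robust SD (rSD) is estimated with the scaled median absolute deviation (other robust estimators would work equally well):

```python
import numpy as np

def robust_sd(x: np.ndarray) -> float:
    # Scaled median absolute deviation, a common robust estimate of the SD.
    return 1.4826 * np.median(np.abs(x - np.median(x)))

def rd(pos: np.ndarray, neg: np.ndarray) -> float:
    # RD = (median_pos - median_neg) / (rSD_pos + rSD_neg)
    return (np.median(pos) - np.median(neg)) / (robust_sd(pos) + robust_sd(neg))

# Simulated Annexin V intensities standing in for one patient sample.
rng = np.random.default_rng(7)
negative = rng.normal(200, 60, 10_000)
positive = rng.normal(2_500, 400, 10_000)
print(round(rd(positive, negative), 1))
```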

In this experiment, the statistical test that will be used is the Student’s T-test. This test relies on the T-distribution, which looks like a normal distribution but has heavier tails; its exact shape depends on something called the “degrees of freedom”.

Mathematically, the degrees of freedom are measured as the sample size minus 1 (n-1). What do the degrees of freedom really mean?

They represent the constraints on the system. For example, if you have 100 values and know their mean, the first 99 can take any value, but the 100th value is completely constrained because it must produce the mean that has already been determined. The same logic applies for the median. Thus, only n-1 values are free to vary.

Back to the T-distribution: the figure below shows a normal distribution (green) alongside T-distributions with 3 degrees of freedom (blue) and 10 degrees of freedom (red); the 95% confidence limits are the vertical lines of the same colors.

As can be seen, the fewer the number of samples, the larger the differences have to be to determine statistical significance. The “rejection region” is the area under the curve to the left and right of the 95% lines. If this was a one-tailed test, then only one of these areas is considered.

Mathematically, there are 2 ways to determine if the data is statistically significant. In either case, we calculate the T statistic (T*), which is essentially the difference between the group means divided by the standard error of that difference.



Based on the threshold (1 − CI, or α), it is possible to calculate the critical value. The decision rule becomes: if T* > critical value, we reject the H0 and accept the HA. The alternative approach is to use T* to calculate the p-value; under this method, if p < α, we reject the H0 and accept the HA.

It doesn’t matter which method you use, although the second method is a bit easier to understand and easier to compare to the threshold. Both approaches get you to the same answer, but most statistical programs use the p value method.

FIGURE 1: Comparison of the T distributions with the Standard Normal Distribution.
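For those who prefer to let software do the arithmetic, here is a minimal sketch using SciPy's two-sample t-test; the RD values are hypothetical, and the one-sided alternative simply tests whether the treated group is lower than the untreated group:

```python
from scipy import stats

# Hypothetical RD values for untreated (UT) and Masiform D-treated (MD) patients.
rd_ut = [2.1, 2.4, 1.9, 2.6, 2.2, 2.3]
rd_md = [1.4, 1.7, 1.2, 1.5, 1.8, 1.3]

# One-sided two-sample t-test: is the treated group lower than the untreated group?
t_stat, p_value = stats.ttest_ind(rd_ut, rd_md, alternative="greater")
print(f"T* = {t_stat:.2f}, p = {p_value:.4f}")  # reject H0 if p < alpha
```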

5. Stating the results

The last step is to state the conclusion. Based on the results of the statistical analysis, there are 2 options: either the H0 is rejected, or not.

If we reject the H0, it means we are confident in the HA.

One way to interpret the p-value is this: if the null hypothesis were true and a random sample were drawn, the p-value is the probability of observing data at least as extreme as the data actually observed.

Thus, a small p-value means that, if the null hypothesis were really true, it would be very unlikely to see the kind of data that is being observed.

Does that mean that if we don’t reject the H0, we are equally confident that it is true? The answer is no — all it means is that there is not sufficient evidence to reject the H0, which is very different than saying that you should accept the H0.

Is that all? Does it stop at this point? Historically, this was the end of the analysis: papers describing the data as “statistically significant” were published, and research marched on.

However, over the last few years, the statistical community has been having a debate about the power of the p-value and how to properly interpret it. This article by Regina Nuzzo describes in detail some of the issues around how the p-value is interpreted, suggesting that, “…it is not as reliable as scientists assume.” This has led the American Statistical Association to develop a statement about the p-value.

There is a role for the p-value in hypothesis testing, however, it is critical that we continue to evaluate the best tools to define statistical significance. These ideas will be the topics of future blog posts, so stay tuned!

To learn more about 5 Considerations For Statistical Analysis Of Flow Cytometry Data, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.


3 Requirements For Accurate Flow Cytometry Compensation


For those new to flow cytometry, compensation is confusing at best and terrifying at worst. Likewise, those who have been doing flow cytometry since the analog ages may be holding on to practices that, while suited to the analog instruments, should be left to the annals of history. As such, a lot of time is spent discussing compensation and the best practices for this critical process.

There are 3 rules that guide proper compensation, and they’ve been written about extensively since they first appeared in the “Daily Dongle” in 2011. It is always good to review and, importantly, there are some caveats and assumptions baked into the rules which bear closer examination.

Compensation Rule 1: “Controls must be at least as bright or brighter than the sample to which the compensation will be applied.”

To ensure that the correct compensation value is calculated, we need accurate measures of our controls.

We use the slope of the line between 2 populations with different intensities in the channel of interest to calculate compensation. In theory, that calculation should yield the same result regardless of where the populations fall in the detector range. However, this is not the case in practice because there is generally greater error in the dim cell measurement than in the bright cells (Figure 1).

Figure 1: Compensation using dim or bright particles. As can be seen from this plot, if the dim particles were used for compensation at the intensities above, this value would be undercompensated. Axes are labeled with excitation line (B=488 nm) and the bandpass filter in front of the PMT.
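As a minimal sketch of the slope idea, here is the calculation for a single hypothetical FITC-stained control spilling into a PE detector (all median values are made up):

```python
# Hypothetical medians from a single FITC-stained compensation control.
fitc_pos, fitc_neg = 45_000.0, 150.0  # median signal in the primary (FITC) detector
pe_pos, pe_neg = 5_200.0, 120.0       # median signal in the spillover (PE) detector

# The spillover coefficient is the slope between the two populations:
# rise (spillover channel) over run (primary channel).
spill_fitc_into_pe = (pe_pos - pe_neg) / (fitc_pos - fitc_neg)
print(f"{100 * spill_fitc_into_pe:.1f}% of the FITC signal appears in the PE detector")
```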

Tacitly included in this rule is the requirement that the signal must be on-scale and in the linear range of the detector.

When the detector converts photons into photocurrent, it is important that this current is linear and proportional to the input. At voltage extremes, this relationship does not hold, so it is imperative that these bounds are determined and the signal maintained within them.

Likewise, it’s important to keep the signal on-scale. As shown below for the PE detector, the linear bounds are highlighted in yellow, and the upper limit of the scale in blue.

Figure 2: Linear bounds and upper limit of the scale for a PE labeled bead measured on a PE detector. Axes are labeled with the excitation line (G=532nm) and the bandpass filter in front of the PMT.

Thus, the first rule focuses on the expression level of the fluorochrome. Determining the best voltage for these controls is an important process in the development and validation of a panel.

Compensation Rule 2: “Background fluorescence should be the same for the positive and negative controls.”

This means that the autofluorescence of the carriers must be matched. The choice to use carrier cells or antibody capture beads depends primarily on 2 factors:

  1. Availability of cells
  2. Expression characteristics of the target antigen

If there are abundant targets on the surface of the cells and you have lots of extra cells, then using cells is no issue.

However, if the targets are not abundant (say on rare events or antigens with low levels of expression), using an antibody capture bead (ABC) is preferred.

ABCs allow you to save your cells for experimental tubes and capture large amounts of antibody to ensure the signal is at least as bright as the experimental (Rule 1), and it is the exact same fluorochrome as used on your sample (Rule 3).

To adhere to the second rule, it is important to avoid the use of the “Universal Negative” — an unstained sample (usually cells) that is collected separately from the positive control tubes and is used to set the background for compensation.

Figure 3: Comparison of using either beads or cells as the negative control for compensation. As shown in each of these plots, the beads were used for the positive control, and either beads or cells were used for the negative control. The lines help to visualize the slope of each uncompensated combination. Axes are labeled with excitation (B=488nm; R=633nm) and the bandpass filter in front of the PMT. The blue line represents the approximate slope between the negative (cells) and positive (beads), while the red line represents the approximate slope between the negative (beads) and positive (cells).

As shown in Figure 3, using the incorrect negative for comparison results in incorrect compensation values.

It is acceptable to use a combination of beads and cells to generate a compensation matrix, as long as you have matched positive and negative populations in each tube.

The use of cells for compensation particles is especially important for vital dyes, fluorescent proteins, reactive oxygen species, and other non antibody-based stains.

Compensation Rule 3: “Compensation controls must EXACTLY match the experimental fluorochrome and detector settings.”

Take the following 3 spectra: GFP, Brilliant Blue515™, and FITC. Each is excited by 488 nm light and measured in the “FITC” channel with a bandpass filter around 530/30 nm or so — however, their spectra are all subtly different (Figure 4).

Figure 4: Spectra of 3 different “green” fluorochromes.

Due to these differences, it is not possible to substitute one as a compensation control for the other.

Tandem dyes vary from manufacturing lot to lot and can degrade over time, so it is especially critical to use the identical tandem (same vial!) for setting compensation.

As shown in Figure 5, 2 different lots of the same fluorochrome (Cy7-PE) were acquired coupled to the same clone. Cells were stained with these 2 antibodies and the uncompensated data is shown. As can be seen from the different lines, if Lot #2 were used to compensate Lot #1, the resulting compensation values would be incorrect.

Figure 5: 2 different lots of the same tandem dye can have very different compensation values. Axes are labeled with excitation line (G=532 nm) and the band pass filter in front of the PMT.

Rule 3 also dictates that a single color compensation control must be collected for each fluorochrome in the panel. If 2 different panels are run at the same time, and these panels have different fluorochromes (especially tandem dyes), each panel needs its own compensation matrix.

Implicit in rule 3 is that the compensation control tubes must be treated identically to the samples. If you fix the cells, the controls must be fixed as well.

The effects of fixation or other treatments on fluorochromes are variable, so it is essential that the controls and samples are treated consistently.

A couple of final words regarding compensation. Compensation values are determined by a combination of fluorochrome properties and optimized instrument settings, not the carrier or antibody used.

When using beads as carriers, if the staining is off-scale, resist the temptation to decrease the voltage of the control, as this will only negatively impact your sensitivity. Rather, during the development of the assay, determine an appropriate amount of antibody with which to stain the beads, so that the fluorescent signal at the best voltage for the experiment meets the 3 rules above — especially that it is on-scale and within the linear dynamic range of the PMT.

What makes a good voltage? In practice, starting with an optimized voltage via peak-2 beads, CS&T, or another technique is a good start. Better still is a voltration (voltage titration) that takes into account the specific fluorochromes and cells being used in the experiment. How to go about determining this has been addressed here.

Finally, make sure to collect sufficient numbers of events. If using beads, at least 10,000 events should be collected. For cells, starting at a minimum of 30,000 events is good, but 50,000 is better. You want to have a sufficient number of events in your positive gate to have the best measure of the fluorescence.

So there they are, the 3 rules of compensation, and some important caveats that need to be remembered when setting up compensation controls for an experiment.

Looking beyond these 3 essential rules, make sure that the controls meet the other criteria addressed here, especially keeping the signal on scale and within the linear dynamic range of the detector. These steps will help ensure the highest quality compensation is obtained. Finally, avoid the temptation to manually adjust the compensation matrix, especially to make the data “look right”. Rather, determine what is causing the issues with the compensation by reviewing the data in the context of these rules. Also, make sure that a standard reference control is always run in the panel, to evaluate the whole staining process and help in troubleshooting when compensation “looks wrong”.

To learn more about the 3 Requirements For Accurate Flow Cytometry Compensation, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.


How to Optimize Flow Cytometry Hardware For Rare Event Analysis


“Not everything that can be counted counts and not everything that counts can be counted.” — William Bruce Cameron (but often misattributed to Albert Einstein)

What does this quote mean in terms of flow cytometry? Flow cytometry can yield multi-parametric data on millions of cells, which makes it an excellent tool for the detection of rare biological events — cells with a frequency of less than 1 in 1,000.

With the development and commercialization of tools such as the Symphony, the ZE5, and others which can measure 20 or more fluorescent parameters at the same time, researchers now have the ability to characterize miniscule population subsets that continue to inspire more and more complex questions.

When planning experiments to detect — and potentially sort — rare events using flow cytometry, we need to optimize our hardware to ensure that optimal signals are being generated and that rare events of interest are not lost in the system noise. This noise is also exacerbated by poor practices when running the flow cytometer.

There are 3 areas of hardware limitations that we need to consider when performing rare event flow cytometry.

1. Speed of the fluidics

The first step in running cells on the flow cytometer is setting up the fluidics to ensure the best flow possible while minimizing coincident events and data spread.

Hydrodynamic focusing is the process that confines our cells within the core stream, pushes them along, and spreads them out along the direction of flow, so that the cells line up single file and pass through the focal point of the laser beam.

But, if the differential pressure is increased, what happens?

An increase in differential pressure between the sheath fluid and the sample fluid being introduced to the flow cytometer causes the core stream to widen. And, as it widens, more cells can pass through the laser per unit time.

There are 2 reasons why this is a concern, especially for rare event analysis:

  1. 2 cells can pass through the laser at the same time, resulting in what is measured as a doublet, and therefore both must be excluded.

By having to exclude more cells, the chances of detecting a rare event decrease.

FIGURE 1: Impact of increasing differential pressure on flow cytometry data.

  2. As the core stream widens, the cells at the edge are more poorly illuminated, and therefore emit less intensely.

When we increase differential pressure, we increase the flow rate and core stream width, allowing the cells to move and meander within the core stream. Some of these cells will not be exposed to the full laser power.

Therefore, the CV of the data spreads and we lose resolution between 2 populations, as seen in the graph below, on the right.

FIGURE 2: Effects of differential pressure on flow cytometry data. Peak CVs spread at higher flow rates.

Thus, there is a trade off between speed of acquisition and the quality of your resolution.

Best practice for rare event analysis is to run the system at low differential pressure so that the event rate is no more than 10,000 events per second (depending on your instrument).

It is often even better to run at a lower rate, such as 5,000 events per second. While this means that acquisition time will take twice as long, the quality of data will be improved. Is the trade-off worth it? For rare event analysis, it is almost a requirement.
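As a rough illustration of the time cost, here is a small back-of-the-envelope calculation (the rare-event frequency, target event count, and event rate are hypothetical):

```python
# Hypothetical numbers for a rare-event acquisition plan.
rare_frequency = 1e-4       # population of interest is 0.01% of total events
target_rare_events = 1_000  # events wanted in the rare gate for decent statistics
event_rate = 5_000          # total events per second at a conservative flow rate

total_events = target_rare_events / rare_frequency
minutes = total_events / event_rate / 60
print(f"{total_events:,.0f} total events, about {minutes:.0f} minutes of acquisition")
# 10,000,000 total events -> roughly 33 minutes at 5,000 events per second
```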

Newer technology, like acoustic focusing from Thermo Fisher, is helping to diminish this effect. Acoustic focusing uses a standing acoustical wave that forces the cells into the center of the core stream, allowing you to run much, much faster than a traditional flow cytometer, without the data spreading found in conventional systems relying on hydrodynamic focusing alone.

2. Coincident events and aborts

What is a coincident event and how does this impact the data?

It all starts with the measurement of the electronic pulse. The schematic of pulse generation is shown in Figure 3.

As a cell passes the laser intercept, photons are received by the PMT, which converts the photons into photocurrent. When the cell is fully inside the laser, the maximal number of photons is being generated and the pulse reaches its peak (the height measurement) before falling back to 0.

Figure 3: Pulse generation as a cell passes through the laser.

A problem arises if a second cell passes into the laser intercept before the first pulse has finished being processed; both events will be aborted, resulting in lost data.

Thus, the size of the pulse matters.

The size of the pulse is ultimately going to be the size of the cell plus the beam height.

A hypothetical 5-micron cell and a 20-micron laser beam yield a 25-micron pulse. The stream of a typical analyzer travels at 5 meters per second, and that of a sorter at 30 meters per second. Thus, on a cell sorter, it takes roughly 0.83 microseconds for a typical pulse to be processed.
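To make the arithmetic concrete, here is a minimal sketch of that transit-time calculation, using the hypothetical cell size, beam height, and stream velocities quoted above:

```python
# Back-of-the-envelope pulse duration: (cell size + beam height) / stream velocity.
cell_um, beam_um = 5, 20                    # microns
velocities = {"analyzer": 5, "sorter": 30}  # stream velocity in meters per second

pulse_um = cell_um + beam_um
for name, v in velocities.items():
    microseconds = (pulse_um * 1e-6) / v * 1e6
    print(f"{name}: {microseconds:.2f} microseconds per pulse")
# sorter: 25 um / 30 m/s = ~0.83 microseconds
```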

On some instruments, notably BD instruments, an additional period called the window extension is added to this processing time. This extension increases the time that the system is looking for a pulse and is depicted in Figure 4.

Figure 4: The impact of window extension on pulse processing.

Imagine a cell has just passed through the laser intercept and the pulse is being processed. The next event cannot enter this extended window space until this first cell leaves the window, otherwise it’s considered a coincident event and excluded. This window can be increased or decreased, based on the size of the cell.

The consequence of altering window extension is shown in Figure 5. Window extension was increased and the number of electronic aborts was measured using 2 different sort masks after approximately 600,000 events.

At higher window extensions, there can be as much as a 13% loss of events.

Figure 5: The effect of window extension on abort rate based on 2 different sort masks.

3. Electronic limitations set by manufacturers

The final piece of the hardware puzzle is the set of limitations imposed by the vendors. These limitations can include the maximal number of events allowed in a file, the number of events per second that can be acquired, the flow rate, the number of gates in a gating hierarchy, and more.

It is critical to understand these limitations while planning the details of the experiment. For analytical flow, you may need to acquire multiple files of the same tube to ensure collection of sufficient event numbers. With sorting, gating hierarchy limitations require careful thought on how to identify the target cells.

Preparing for rare event analysis requires an understanding of the power and limitation of the instrument to be used. From how fast to run the fluidics, to how the signal is processed, to the number of gates that can be used in the sorting experiment, each factor impacts the outcome of the experiment. With these hardware limitations understood, the next step is to understand how to address the sample preparation and identification of the target cells.

To learn more about How to Optimize Flow Cytometry Hardware For Rare Event Analysis, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.


Procedural Limitations That Impact The Quality Of Rare Event Flow Cytometry


Having dealt with the hardware issues related to rare event analysis in the first part of this series, it is time to turn to our second focus: how samples are prepared.

Stem cells, circulating tumor cells, and minimal residual disease in cancer patients were all discovered through the power of rare event flow cytometry. When preparing for rare event analysis, sample preparation and data analysis must be taken into account at the beginning.

How will we stain our cells? How will we analyze our cells? What controls will we use to help us identify our rare events? What statistical methods do we use to analyze our results? Here are 6 procedural limitations that impact the quality of rare event flow cytometry data and how to optimize your assay to get the best results possible.

1. Staining your cells.

In every lab, there is the “Notebook”, the collection of time-tested protocols handed down from the PI and guarded by the senior technician. In the confines of the “Notebook” is the “Protocol” to stain cells for flow cytometry. What worked 30 years ago is good enough for today, right?

There are many things that have changed since the time the “Protocol” was written, including the best practices for staining cells, choosing fluorochromes, setting voltages, and more. It is time to review the “Protocol” and see if it has stood up to the test of time.

Flow cytometry requires a single-cell suspension. Cell debris and clumps compromise the data at best and, at worst, can clog the instrument and ruin the whole experiment. Even with liquid samples (e.g. blood, marine water, bacterial cultures), care must be taken in how the cells are treated.

For example, check that the protocol states the RCF (Relative Centrifugal Force) used to spin the cells down. Do not rely on RPM (Revolutions Per Minute) as a measure. For mammalian cells, a good rule of thumb is to pellet the cells at around 180 x g. Below is a handy chart that shows the relation between rotor radius (in mm) and the speed (in RPM) needed to achieve a given RCF.

You can calculate this yourself with the following equation:

RCF = 1.118 × r × (RPM/1000)², where r is the rotor radius in mm
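If you prefer to compute the conversion rather than read it off a chart, here is a minimal sketch. The 150 mm rotor radius in the example is a hypothetical value; substitute the radius of your own rotor.

```python
import math

def rpm_for_rcf(rcf_xg, rotor_radius_mm):
    """RPM needed to reach a target RCF, from RCF = 1.118 * r * (RPM/1000)^2."""
    return 1000 * math.sqrt(rcf_xg / (1.118 * rotor_radius_mm))

def rcf_for_rpm(rpm, rotor_radius_mm):
    """RCF (x g) produced at a given RPM for a rotor of radius r (in mm)."""
    return 1.118 * rotor_radius_mm * (rpm / 1000) ** 2

# Example: the ~180 x g rule of thumb on a hypothetical 150 mm rotor radius
print(round(rpm_for_rcf(180, 150)))   # ~1036 RPM
print(round(rcf_for_rpm(1040, 150)))  # ~181 x g
```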

For those new to working with cells, this is worth emphasizing: a quick "zip" spin at top speed in the microfuge will leave you with nothing more than debris.

Remember to filter the cells as well. Typically, this is done before staining, but is sometimes required afterwards as well if the cells are clumpy.

For adherent cells, try to keep them cold and add EDTA, which will chelate Ca++ that is needed for cell-cell adhesion.

During sample preparation, it is critical to remember that the microscope is your best friend. Nothing helps to confirm clumpy cell preps, before they cause issues on the cytometer, better than the microscope and the Mark I eyeball.

For solid tissues (like tumors, lungs, etc.), each tissue will have to be optimized independently. One of the best references out there is the Worthington Tissue Dissociation Guide.

Having prepared a good cell suspension, make sure to block it properly. Andersen’s paper is a great guideline for determining the best blocking strategy for your cells. Don’t forget to titrate your antibodies!

Consider using a master mix as well, which contains all the antibodies that will be used. This makes it easier to stain multiple samples without trying to measure 0.25 µl of each reagent.
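As an illustration, a minimal sketch of a master mix calculation follows. The antibody names and per-test volumes are hypothetical placeholders, and the 10% overage is simply a common buffer against pipetting losses.

```python
# Hypothetical panel: titrated volume (uL) of each antibody per test
titer_ul_per_test = {
    "CD3-FITC": 0.25,
    "CD4-PE": 1.0,
    "CD8-APC": 0.5,
}

n_samples = 12
overage = 1.10                    # make ~10% extra to cover pipetting losses
n_tests = n_samples * overage

# Volume of each antibody to add to the master mix
for antibody, vol in titer_ul_per_test.items():
    print(f"{antibody}: {vol * n_tests:.1f} uL")

# Volume of master mix to dispense per sample (before adding staining buffer)
print(f"Dispense {sum(titer_ul_per_test.values()):.2f} uL of mix per sample.")
```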

2. Using a stability flow gate.

After staining, we turn our attention to issues that can arise when the sample is placed on the flow cytometer. These factors can impact your ability to find the cells of interest.

For example, while we expect uniform flow, this may not always be the case. Micro-clogging at the input end of the system, or clogs in the waste path, can affect the speed at which cells move through the cytometer. In both cases, the consequences are the same: the data will be incomplete, because the cells arrive at the downstream laser intercepts either too quickly or too slowly, and the pulses from these cells will not be matched up properly.

Additionally, it takes a few seconds for the flow to stabilize at the beginning of acquisition, and if the tube is run dry at the end, there will be a large increase in event number due to air bubbles entering the stream.

To address these issues during data analysis, a flow stability gate should be considered as the first gate in the data analysis workflow.

Figure 1: Flow stability gating to remove uneven flow.

Using this plot of time versus forward scatter, it is obvious where the issues occurred. In this case, the major problems with this file are at the beginning and the end. If there are flow stability issues in the middle of the file, or several discrete disturbances, one can create a series of gates and use Boolean logic to remove these regions.

You can set flow stability gates manually, but there are also some new tools, including programs called FlowClean and FlowAI, which will automatically do this for you as part of the analysis process.

In the paper describing FlowClean, the authors ran it on files from FlowRepository and found that on the order of 13% of the files showed these types of abnormalities.
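For readers who want to script a first-pass check themselves, here is a simplified sketch of a manual stability gate. It is not the FlowClean or FlowAI algorithm, just a rate-based filter, and the "Time" column name is an assumption about how your FCS export is labeled.

```python
import numpy as np

def stable_time_mask(time, n_bins=100, tolerance=0.3):
    """Keep events falling in time bins whose event count is within
    +/- tolerance (fractional) of the median bin count."""
    time = np.asarray(time, dtype=float)
    edges = np.linspace(time.min(), time.max(), n_bins + 1)
    bin_idx = np.clip(np.digitize(time, edges) - 1, 0, n_bins - 1)
    counts = np.bincount(bin_idx, minlength=n_bins)
    median = np.median(counts)
    good_bins = np.abs(counts - median) <= tolerance * median
    return good_bins[bin_idx]

# Usage, assuming `events` is a pandas DataFrame exported from an FCS file:
# mask = stable_time_mask(events["Time"])
# clean_events = events[mask]
```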

3. Removing artifacts — doublet discrimination.

Even with the best prepared samples, artifacts caused by cell aggregates can occur. To remove these artifacts, you need to apply doublet discrimination. This technique takes advantage of the fact that we understand how the geometries of the pulses are related.

In the image below, the top plot shows the voltage pulse for a single cell.

It has a height and a width — how long it took to pass through the laser intercept.

And the area, shaded blue, is the integral of the pulse height over the width.

Figure 2: Removal of doublets by pulse geometry gating.

Now, if we have two cells that enter the flow cell together, or enter into the laser intercept together, or are in such close proximity that they look like they’re together, you may see something that looks like the lower plot.

So now, the width is increased to 3 microseconds and the area is also increased.

This shows up in the larger plot on the right, which depicts forward scatter height versus forward scatter area.

Cells along the diagonal are the cells we want. Cells outside this diagonal line represent doublets that we want to exclude.
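As an illustration of this pulse-geometry logic, here is a minimal sketch of a scripted singlet filter. The ratio window is a hypothetical starting point; in practice the gate is drawn interactively on the height-versus-area plot.

```python
import numpy as np

def singlet_mask(fsc_a, fsc_h, low=0.85, high=1.15):
    """Keep events whose height-to-area ratio sits near the singlet diagonal.
    A doublet has roughly the same height as a singlet but ~2x the area,
    so its ratio falls well below the singlet band."""
    fsc_a = np.asarray(fsc_a, dtype=float)
    fsc_h = np.asarray(fsc_h, dtype=float)
    ratio = fsc_h / np.clip(fsc_a, 1e-9, None)
    ratio = ratio / np.median(ratio)   # center the singlet band around 1.0
    return (ratio > low) & (ratio < high)

# singlets = events[singlet_mask(events["FSC-A"], events["FSC-H"])]
```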

Recently, Hazen et al discussed how the area scaling factor (ASF) in DIVA-based instruments impacts data quality and sort results. As the authors point out, since CS&T sets the ASF based on a small particle, this value may not be optimal for larger cell types. This article provides good guidance for developing protocols to train investigators to adjust the ASF on their instruments. This is especially important for sorting, but will benefit the analysis of larger cells as well.

4. Removing cellular debris.

Many people are trained to use forward and side scatter to identify their cells of interest. However, forward and side scatter properties can change with fixation, permeabilization, sample conditions, activation or inactivation, or disease state. Thus, it is better to let antibodies help identify the cells of interest, as they bind specific targets.

There is still a place for the forward and side scatter gate, which is to clean up the data some more. This is affectionately called the schmutz gate.

This gate should be very generous, removing those events that are off-scale, those cells in the lower left of the plot, and the cell-debris events that don’t add to the data.

It is also a good time to remove the small, pyknotic events.

5. Eliminating dead cells.

With compromised membranes, dead cells can masquerade as live cells with bizarre phenotypes because they take up any antibody in the staining buffer. There are 2 ways we can improve our analysis: by eliminating dead cells using a viability dye, and by using a dump channel.

A dump channel is a mixture of antibodies against targets that are not of interest for downstream analysis. For example, when trying to identify murine HSC, a mixture of B220, Ter119, CD3, CD4, GR1, and CD11b is often added in the same color; these are all markers of mature lineages, and the HSC is defined as being undifferentiated.

Cell-impermeant viability dyes, including PI, 7-AAD, ToPro3, and DAPI, are DNA-binding dyes that cannot enter cells with an intact membrane. Cells with compromised membranes, however, let these dyes in, where they complex with DNA and fluoresce brightly.

These viability dyes are useful for both cell sorting and live cell analysis.

Amine-reactive dyes, which bind to the amine groups of proteins, are a second class of viability dyes. These are excellent for assays involving the measurement of intracellular targets, and they are available in so many colors that it is easy to add one to almost any panel.

As shown in Figure 3, we combine a viability dye and a dump channel to remove uninteresting or dead cells, and take only live cells of interest forward in the analysis.

Figure 3: Combining the dump channel and viability dye in one plot to identify the cells of interest.

One trick with dump channel reagents is to use biotinylated antibodies for all of them, which allows the whole dump to be detected with a single streptavidin-conjugated reagent.
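A minimal sketch of this combined dump-plus-viability gate expressed as a Boolean filter is shown below. The channel names and cutoffs are hypothetical; in practice the cutoffs come from your controls (e.g. FMOs and a dead-cell control).

```python
import numpy as np

def live_non_dump_mask(dump_signal, viability_signal, dump_cutoff, viability_cutoff):
    """Keep cells that are negative (below cutoff) in BOTH the dump channel
    and the viability dye channel."""
    dump_signal = np.asarray(dump_signal, dtype=float)
    viability_signal = np.asarray(viability_signal, dtype=float)
    return (dump_signal < dump_cutoff) & (viability_signal < viability_cutoff)

# keep = live_non_dump_mask(events["Dump-BV510"], events["LiveDead-UV"],
#                           dump_cutoff=1e3, viability_cutoff=5e2)
# cells_of_interest = events[keep]
```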

The gates described above are useful for cleaning instrument-related artifacts out of the data, as well as for eliminating unwanted cells.

6. Placing your gates.

Gate placement is very important because gating is an all-or-nothing process. When you put a gate on the plot, the cells that are inside the gate continue forward, and the cells outside of the gate are thrown away, even if they’re only one pixel away.

It is critical to place gates correctly, using the controls that were run with the experiment to help with this process. One of these controls is the fluorescence minus one (FMO) control.

Figure 4 shows how to use an FMO. Start by gating on the fully stained sample before applying the gate to the FMO controls. If the gate is poorly placed, and there is a large number of events in the gate on the FMO plot, adjust the gate until the number is reduced to an acceptable level. You can set this percentage to whatever you are comfortable with. A good place to start is 0.1%, which is based on the fact that 99.7% of the data in a normal distribution is within 3 standard deviations from the mean. Whatever the percentage, make sure to explicitly state it.

Figure 4: The FMO control used for setting and confirming gates.

FMO controls are those cells stained with all antibodies except for one. These controls are essential for setting up good gates, especially for rare event analysis and emergent markers.

During the process of panel development, start with all the FMO controls, and during the validation, identify the FMOs that are critical for identifying the populations of interest.
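If you prefer to check the 0.1% guideline above programmatically, a minimal sketch follows. The channel name is a hypothetical placeholder, and the percentile approach is simply one way to express "no more than 0.1% of FMO events above the gate".

```python
import numpy as np

def fmo_threshold(fmo_values, allowed_positive_fraction=0.001):
    """Intensity above which only `allowed_positive_fraction` (default 0.1%)
    of the FMO control events fall."""
    fmo_values = np.asarray(fmo_values, dtype=float)
    return np.percentile(fmo_values, 100 * (1 - allowed_positive_fraction))

# threshold = fmo_threshold(fmo_events["CD25-PE"])
# fraction_positive = np.mean(stained_events["CD25-PE"] > threshold)
```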

The other thing that’s very useful for placing your gates is having a known positive control and a known negative control to help define your populations of interest.

By the same token, a reference control is also useful for setting your gates properly: a sample with consistent, reproducible behaviour in your assay that is frozen down in aliquots, then thawed and stained every time you run the experiment.

Rare event analysis requires patience. It involves optimizing the sample processing and staining. It involves understanding how the flow cytometer works and monitoring the run to identify when things go wrong. It involves properly designing the analysis workflow and using all the information in the flow cytometry data file. Couple that with ensuring the experiment has been properly designed for statistical analysis, and it will be possible to identify these rare events with rigor.

To learn more about Procedural Limitations That Impact The Quality Of Rare Event Flow Cytometry, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.

Flow Cytometry Mastery Class wait list | Expert Cytometry | Flow Cytometry Training

Statistical Challenges Of Rare Event Measurements In Flow Cytometry


To conclude our series on rare event analysis, it is time to discuss the statistics behind rare event analysis. The first 2 parts of this series covered the hardware aspects of measuring rare events and some specific recommendations for gating/analysis of rare events.

It is necessary to sort through hundreds of thousands or millions of cells to find the few events of interest.

With such low event numbers, we move away from the comfortable domain of the Gaussian distribution and move into the realm of Poisson statistics.

There are 3 points to consider when building confidence that the events being counted are truly events of interest, and not random events that just happen to fall into the gates of interest.

1. How do you know if an event is real?

How do you know that your rare event is real? When subsetting the population, you might have an occurrence rate of 0.1% or lower. This means that for every 100,000 cells, 100 cells or fewer will be in the final gate of interest.

How can you confirm and be comfortable they are real?

In Poisson statistics, the number of positive events is the important factor, not the total number of events.

In Poisson statistics, the mean and variance of the distribution are equal to the number of positive events. The standard deviation is the square root of the variance.

So, if you have 2 events in a region, the CV of that data is roughly 71%, whereas with 100 events, the CV drops to about 10%.
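These numbers follow directly from Poisson counting statistics: with N positive events, the mean is N, the standard deviation is √N, and so the CV is 1/√N. A quick check:

```python
import math

for n_positive in (2, 12, 100, 1000):
    cv = 100 / math.sqrt(n_positive)          # CV in percent = 100 / sqrt(N)
    print(f"{n_positive:>5} positive events -> CV ~ {cv:.0f}%")
# 2 -> ~71%, 12 -> ~29%, 100 -> 10%, 1000 -> ~3%
```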

But, what does this really mean?

In this paper by Maecker and coworkers, the authors looked at inter-lab CVs in flow cytometry experiments and estimated they were as high as 40%, some of which could be reduced by centralizing the analysis. More importantly, the inter-lab CVs were highest (57-82%) for samples where the average percentage of cells was below 0.1%.

And, with as few as 12 positive events, the Poisson counting error alone gives a CV of roughly 29% (1/√12).

The precision of our frequency measurement is now dominated by assay error, not by the counting statistics of the rare events being analyzed.

With rare event analysis, you demonstrate significance through assay reproducibility. The number of samples that should be measured can be determined using the power calculation, which is discussed in more detail here.

2. How many total events do you need?

Statistically, assay variation can be a major source of error for this analysis. That leads to the question, “How many events do you need?”

The short answer: whenever you can, collect as many events as possible.

As discussed previously, there may be limitations imposed by the hardware and software which limit how much data can be collected in a single file. This means you may have to collect multiple files from the same tube.

In third party software, it is possible to do some preliminary gating to reduce file size, and concatenate multiple files after this preliminary analysis to make the final gating more complete.

Turning back to how many events is enough, there is more than one n. To show the significance of the data, the analysis must be repeated multiple times (i.e. power the experiment appropriately) and have the correct complement of appropriate negative controls.

The data above illustrate both of these concepts. On the left is the gating strategy and the control (a normal patient control), while on the right are the results of several analysis runs on 2 patients, showing the differences between the 2 populations.

The statistical analysis of these 2 shows that there is a significant difference, as denoted by the asterisk.

Returning to the question of how many events is enough, the question to ask is, “What is the CV required for analysis — what spread of the data is acceptable?”

Is 10,000 events enough?

The chart below shows the coefficient of variation (CV) value for a given frequency of cells. The CV is related to the number of positive events, and is defined as the SD/mean.

In general, a lower CV is better. The CV is another way to express the precision and repeatability of an experiment.

Using this table, if a broad CV is acceptable, then at a cell frequency of 0.1%, 10,000 total events may be enough. However, a 10% CV requires 100 positive events, so 10,000 total events is only sufficient when the population frequency is around 1%.

For very rare populations, a 10% CV requires collection of a million, or even 10 million, total events. At an acquisition rate of 10,000 events per second, it would take 1,000 seconds, or just under 17 minutes, to collect 10 million events.
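A minimal sketch of the arithmetic behind this table: the number of positive events needed for a target CV is (1/CV)², and the total events required is that count divided by the population frequency.

```python
def total_events_needed(target_cv_percent, population_frequency):
    """Total events to acquire so the positive-event count gives the target Poisson CV."""
    positives_needed = (100 / target_cv_percent) ** 2
    return positives_needed / population_frequency

for freq in (0.01, 0.001, 0.0001):                 # 1%, 0.1%, 0.01% populations
    total = total_events_needed(10, freq)          # 10% CV -> 100 positive events
    print(f"{freq:.2%} population: {total:,.0f} total events for a 10% CV")
# 1.00%: 10,000    0.10%: 100,000    0.01%: 1,000,000
```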

The CV relates to the ability to identify a difference between 2 populations, which in turn relates to the power of the experiment. Because Poisson statistics give us the standard deviation of the population directly, the calculations are easier; however, the size of the difference between the control and experimental samples is what ultimately drives them.

In this paper, Mario Roederer discusses the issue of how many events you need before you can know whether something is real. According to this paper, one of the important things to do is compare your positive sample to a set of controls so that you can interpret the data correctly.

There’s no arbitrary number of events that is the “right” number.

Even 12-14 positive events may be meaningful, based upon your knowledge of the system and the data generated by your controls.

3. How do you sort rare events?

The ability to sort cells for downstream applications is one of the most powerful applications of flow cytometry. Poisson statistics again play a role in determining an appropriate event rate.

If the drop drive frequency is 80 kHz, or 80,000 droplets generated per second, how many events per second should you run? Remember that a cell sorter sorts droplets, not cells, per se, but the cells are contained within the drops.

Depending on the sort envelope, the sort decision can include 1 or 2 droplets. So, what is a reasonable event rate?

When the event rate is equal to the drop drive frequency, Poisson statistics predict that a little under 40% of the drops will have no cells, a little under 40% will have 1, almost 20% will have 2, and roughly 8% will have 3 or more cells.

If the event rate is ½ the drop drive frequency, 7.5% of the droplets will have 2 cells. When the event rate is ¼ the drop drive frequency, about 80% of the droplets are empty and about 2% of the drops will have 2 events. Going to ⅙ the drop drive frequency, the improvements are minimal.
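These occupancy figures can be reproduced with the Poisson distribution, where lambda is the ratio of event rate to drop drive frequency. Here is a small sketch that verifies them.

```python
import math

def drop_occupancy(event_rate, drop_frequency):
    """Probabilities of 0, 1, 2, and 3+ cells per droplet for a given event rate."""
    lam = event_rate / drop_frequency
    p = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(3)]
    return p[0], p[1], p[2], 1 - sum(p)

for ratio in (1.0, 0.5, 0.25):
    p0, p1, p2, p3 = drop_occupancy(ratio * 80_000, 80_000)
    print(f"event rate = {ratio} x drop frequency: "
          f"empty {p0:.1%}, 1 cell {p1:.1%}, 2 cells {p2:.1%}, 3+ {p3:.1%}")
# 1.00x: ~37% / ~37% / ~18% / ~8%   0.50x: ~61% / ~30% / ~7.6% / ~1.4%   0.25x: ~78% / ~19% / ~2.4% / ~0.2%
```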

So, if the drop drive frequency is 40 kilohertz or 40,000 droplets per second, the event rate should be no more than 10,000 events per second.

What does this mean practically?

This chart can help you determine how long a sort will take, based on the drop drive frequency and the frequency of the population, assuming 100,000 cells are needed for a downstream application.

Sort operators are often asked if there is a way to reduce the time it takes to sort, especially with a rare event population. Since there is a ceiling on event rates, our only option is to enrich the sample to increase the proportion of desired events.

This can be done using a depletion assay with magnetic beads from Miltenyi Biotec, the IMAG system, Dynabeads, and others.

In these systems, cells are tagged with an antibody conjugated to a magnetic bead and then exposed to a magnet. The labeled cells are held by the magnet, while the unlabeled cells stay in suspension, or pass through the column, for collection and downstream sorting.

Let’s look at when it might be helpful to incorporate this pre-sort enrichment step.

Starting with 100 million cells and a desired population at 0.01%, if you took those 100 million cells and sorted them at 20,000 events per second, it would take about 83 minutes to do the whole sort.

If we take those same 100 million cells and perform a magnetic bead enrichment, which will take about 45 minutes using one of the various magnetic isolation kits, the untouched cells will be about 10 million cells and the population of rare events is enriched to 0.1%.

Sorting those 10 million cells at 20,000 events per second will only take about 10 minutes, compared to 83 minutes pre-magnetic bead enrichment. The faster sort means that your cells are going to be healthier because you can get them back into culture, or into whatever buffer they need, more quickly.
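A small sketch of the arithmetic in this example follows; the 45-minute enrichment time and the enrichment yield are taken from the example above and will vary with the kit and the sample.

```python
def sort_time_minutes(total_cells, event_rate_per_sec):
    """Minimum time on the sorter, ignoring setup, at a fixed event rate."""
    return total_cells / event_rate_per_sec / 60

direct = sort_time_minutes(100_000_000, 20_000)          # ~83 minutes
enriched_sort = sort_time_minutes(10_000_000, 20_000)    # ~8 minutes on the sorter
print(f"Direct sort: {direct:.0f} min")
print(f"After bead enrichment (~45 min): {enriched_sort:.0f} min on the sorter, "
      f"~{45 + enriched_sort:.0f} min total")
```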

After all the tweaking of the hardware and optimizing the data analysis, the statistics must be considered. Poisson statistics dominate rare event analysis. From determining how many cells to collect, to how fast to sort cells, the number of positive events is critical for determining the statistics involved. The charts and data in this blog can help design your next rare event analysis experiment, and help provide the basis for improving reproducibility and consistency of the experiments.

To learn more about Statistical Challenges Of Rare Event Measurements In Flow Cytometry, and to get access to all of our advanced materials including 20 training videos, presentations, workbooks, and private group membership, get on the Flow Cytometry Mastery Class wait list.

Flow Cytometry Mastery Class wait list | Expert Cytometry | Flow Cytometry Training
