Saturday, November 9, 2013

Today (November 8th), we worked with cells from the blood-brain barrier using microscopy! By definition, microscopy is "investigation using a microscope." To refresh my knowledge of the microscopy processes we use, JP gave me this link to read. In our work, we used Alexa 488 and Hoechst 33342 (a blue nuclear stain very similar to DAPI). Before I arrived, JP fixed the cells and added the primary antibody. We had to decide whether or not to permeabilize the cells before adding the Hoechst. We decided to try imaging the cells without permeabilization first. We added a Hoechst solution to the sample, incubated the cells, and then removed the excess solution. We then washed the sample a couple of times to remove excess dye.
Once the sample was ready to be imaged, we had to decide what wavelengths of lasers to use to image them. To do this, we used a program called SpectraViewer. The resulting graphs are shown below.
The vertical line on the graphs represents the wavelength of the selected laser. The green curve represents the Hoechst 33342 and the red curve represents the Alexa 488. The shaded regions represent the region that will be recorded. We decided to use the 405 nm laser for the Hoechst 33342 and the 488 nm laser for the Alexa 488.
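Just to convince myself that the choice makes sense, here is a tiny Python sketch of the same logic we followed: pick the available laser line closest to each dye's excitation maximum. The excitation peaks and the list of laser lines are approximate values I am assuming for illustration, not numbers pulled from SpectraViewer itself.

```python
# A minimal sketch of the laser-selection logic (not the actual SpectraViewer tool).
# The excitation maxima and the available laser lines below are assumptions.

EXCITATION_MAX_NM = {
    "Hoechst 33342": 350,    # approximate excitation peak when bound to DNA
    "Alexa Fluor 488": 490,  # approximate excitation peak
}

LASER_LINES_NM = [405, 488, 561, 640]  # assumed laser lines on the confocal

def closest_laser(excitation_max_nm: float) -> int:
    """Return the available laser line nearest to the dye's excitation maximum."""
    return min(LASER_LINES_NM, key=lambda laser: abs(laser - excitation_max_nm))

for dye, ex_max in EXCITATION_MAX_NM.items():
    print(f"{dye}: use the {closest_laser(ex_max)} nm laser")
# With these assumed numbers, this picks 405 nm for Hoechst 33342 and
# 488 nm for Alexa 488, matching what we chose from the SpectraViewer plots.
```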
To image the cells, we used a Zeiss Spectral Confocal Microscope (seen in the image below). It was so cool to be able to see the cells magnified to such a high degree!
The nuclei were very clearly visible, but the staining from the Hoechst 33342 was not visible. After viewing the images, we decided that it would be necessary to permeabilize the cells and re-add the Hoechst.
I won't be able to go to RPI next week because I will be in Kansas City for my national championship horse show, but I can't wait to return on the 22nd to see the images from the cells and continue our work!
Sunday, November 3, 2013
Analyzing Peptide Mapping Data
On Friday (November 1st), I did a lot of work analyzing peptide mapping data using the Origin program. But before I get into that, JP said that the peptide synthesis that I prepped for last week came out perfectly! Anyway, I worked with three different sets of data from the same experiment: one set of 20 peptides and two sets of 56. After copying the data from a previous Excel sheet, my first step was to change the settings of all of the odd-numbered columns to "Y-error" because those columns contained standard deviation data. After I had done that for all three sets of data, I fit the curves using a one-site binding (pharmacology) non-linear curve fit. In that process, I had to set the k-values of every data set to 1 to standardize the fits.
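For anyone curious what that fit is doing under the hood, here is a minimal Python sketch of a one-site binding fit, using scipy instead of Origin and completely made-up concentration and intensity values.

```python
# A rough sketch of the kind of one-site binding fit we do in Origin.
# The concentrations and signals below are invented purely for illustration.
import numpy as np
from scipy.optimize import curve_fit

def one_site_binding(conc, bmax, kd):
    """One-site binding model: signal = Bmax * [L] / (Kd + [L])."""
    return bmax * conc / (kd + conc)

# Hypothetical ligand concentrations (M) and measured intensities for one peptide.
conc = np.array([1e-7, 3e-7, 1e-6, 3e-6, 1e-5, 3e-5])
signal = np.array([120, 300, 700, 1200, 1700, 1900])

params, _ = curve_fit(one_site_binding, conc, signal, p0=[2000, 1e-6])
bmax, kd = params

# R-squared, the same goodness-of-fit number we compare between peptides.
residuals = signal - one_site_binding(conc, *params)
r_squared = 1 - np.sum(residuals**2) / np.sum((signal - signal.mean())**2)

print(f"Bmax = {bmax:.0f}, Kd = {kd:.2e} M, R^2 = {r_squared:.3f}")
```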
Our goal is to look at the data for each of the peptides and determine which peptides have low Kd values (indicating high affinity for the substrate) and high R-squared values (indicating a good fit to the binding curve). The area we are looking for is represented by the green area above. Looking at the plots, I narrowed the range to an R-squared value of 0.5 to 1 and a Kd value of 3E-6 to 8E-6, and changed the data point labels to display their peptide numbers. Based on the layout of peptides in this region, we determined that the most accurate set of data is the second set of 56, which makes sense because those peptides were made in 2012, while the peptides used in the other two sets were made in 2011.
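Here is a small sketch of that screening step in Python, using a hypothetical table of fit results rather than our real peptide data.

```python
# A small sketch of the screening step: keep only peptides whose fit falls in
# the "good" window of Kd and R-squared. The example table is hypothetical.
import pandas as pd

fits = pd.DataFrame({
    "peptide": [1, 2, 3, 4, 5],
    "Kd":      [2.0e-6, 4.5e-6, 7.2e-6, 9.0e-6, 5.1e-6],
    "R2":      [0.35,   0.82,   0.61,   0.90,   0.48],
})

# The same cutoffs I used when narrowing the plots:
# R-squared between 0.5 and 1, Kd between 3e-6 and 8e-6.
hits = fits[fits["R2"].between(0.5, 1.0) & fits["Kd"].between(3e-6, 8e-6)]
print(hits)  # peptides 2 and 3 pass in this made-up example
```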
After looking at those plots, I then plotted Peptide # vs. 1/Kd. This produced plots with many sporadic peaks and troughs, so JP introduced me to the idea of curve smoothing. Although we measure Kd values for whole peptides, we are ideally looking for the Kd values of individual amino acids. To get at this, we use averages of the Kd values of neighboring peptides. For example, point A would equal Kd1, point B would equal the average of Kd1 and Kd2, and so on.
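Here is a rough sketch of that smoothing idea with made-up numbers; a trailing-window average like this is just one way to do it, and not necessarily exactly what the Excel template does.

```python
# A rough sketch of the smoothing, using invented Kd values. Each point is
# replaced by the average of itself and the few peptides before it, which is
# what "A = Kd1, B = average of Kd1 and Kd2, ..." works out to.
import numpy as np

kd_values = np.array([2.1, 6.5, 1.8, 5.9, 2.4, 7.0, 1.6, 6.2, 2.0, 5.5]) * 1e-6

def trailing_average(values, window):
    """Average each point with up to (window - 1) preceding points."""
    smoothed = []
    for i in range(len(values)):
        start = max(0, i - window + 1)
        smoothed.append(values[start:i + 1].mean())
    return np.array(smoothed)

print(trailing_average(kd_values, 3))  # lightly smoothed
print(trailing_average(kd_values, 7))  # heavily smoothed; the peaks flatten out
```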
Using an Excel template that JP made for a previous experiment, I plotted running averages of three up to fifteen peptides. While the fifteen-peptide averages obviously smoothed the curve the most, it is hard to determine which plot we should use, because as the curve gets smoother, it hides certain important peaks.
After I finished my analysis work, JP showed me some of the brain cells that he is culturing! He just started growing them on Tuesday, so they had not yet grown into a full layer of cells. They will be used to run a second trial of an experiment we did last winter, which involves testing the permeability of the blood-brain barrier by passing different sizes of sugar molecules through a layer of cells. I can't wait to continue working with the data and see where that experiment goes!
Monday, October 28, 2013
Purification Experiment Set-Up
Last Friday (October 25th), JP introduced me to another project that our lab is working on. Previously, we had two different ways to produce free peptide. First, we could grow peptides off of cellulose and then dissolve the cellulose in DMSO, leaving the free peptide in solution. Second, we could grow peptides off of polymer beads that have functional-group linkers for the peptides to attach to; these linkers can then be cleaved with a strong acid to release free peptide. Now, we are working on producing peptide that is grown directly off of the polymer bead, so it cannot be cleaved. Packing these beads into a column, we could use them to purify a biologic: when the biologic is loaded onto the top of the column, it is captured by the peptides, and we can then elute the biologic off.
To prep for this experiment, I entered seven specific amino acid sequences into the peptide machine program, copying each 13 times. After entering the sequences, I used the program to see what volume of each amino acid solution would be needed for the peptide machine. I then rounded those values up to either 5, 10, or 15 mL of solution. After determining the amount of each solution we would need, I calculated the grams of amino acid and the mL of NMT to be added to make each solution. I then labeled each amino acid tube and added the specified grams of amino acid to each.
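As a sanity check on the arithmetic, here is a small sketch of the solution calculation. The target concentration and molecular weights are placeholders I picked for illustration, not the actual values from the peptide machine program.

```python
# A back-of-the-envelope sketch of the solution calculation. The target
# concentration and molecular weights are placeholders, not the real numbers
# from the peptide machine program.
AMINO_ACID_MW = {           # g/mol for Fmoc-protected amino acids (approximate)
    "Fmoc-Gly-OH": 297.3,
    "Fmoc-Ala-OH": 311.3,
}
TARGET_CONC_M = 0.3         # assumed molar concentration of each stock solution

def grams_needed(amino_acid: str, volume_mL: float) -> float:
    """Grams of amino acid to dissolve in the given volume of solvent."""
    moles = TARGET_CONC_M * (volume_mL / 1000.0)
    return moles * AMINO_ACID_MW[amino_acid]

# e.g. for a 10 mL stock of Fmoc-Gly-OH:
print(f"{grams_needed('Fmoc-Gly-OH', 10):.2f} g")  # ~0.89 g with these assumed numbers
```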
We will use the amino acid solutions that I made today to produce the seven specified peptides that I entered into the program. I look forward to returning to RPI next Friday to see what progress has been made!
Sunday, October 13, 2013
HCP Assay
On Friday (October 11th), I had a variety of different tasks at RPI. My main task was to help Doug with a host cell protein (HCP) assay. In this experiment, we plan to analyze the binding of HCP to a peptide array using high-throughput fluorescence screening. To do so, the peptide arrays will be incubated with HCP solution, primary antibody, and secondary antibody, with thorough washes in between to remove excess solution. We will be testing different concentrations of primary antibody. The secondary antibody is fluorescently tagged, so we can screen the slides for intensity, which indicates the amount of HCP bound by each peptide.
Previously, Doug printed peptide arrays onto three different slides and coated them with an HCP solution so the HCP could bind the peptides. My first job was to wash the HCP solution off of the slides. This washing involves pouring off the solution from each slide, adding 10 mL of PBS to the petri dish, and then rotating the dishes for 10 minutes to wash off any excess HCP that was not bound by the peptide array. We then made the different concentrations of primary antibody and applied them to the respective slides, making sure all of the solution stayed on each slide and was evenly distributed. After an hour of incubation, we washed off the primary antibody and applied the light-sensitive secondary antibody.
While waiting out the primary antibody incubation, we were going to make western transfer buffer, but we did not have enough methanol. Instead, we made 2 L of PBS, which is a process I have done before!
I am excited to return to RPI and see the results of this experiment, but I will unfortunately not be able to go this Friday (October 18th) due to Parents' Day.
[Figure by JP Trasatti, Karande Lab, RPI]
Sunday, October 6, 2013
Host Cell Proteins
On Friday (October 4th), I finally returned to RPI after missing a week! I worked with Doug (one of the undergraduates in my lab) on a project he is working on involving host cell proteins. A couple weeks ago, these host cell proteins were involved in the SDS-PAGE gel we were working on!
Previously, we have only had intensity data to analyze the amount of host cell protein (HCP) bound by different peptides. This is relative data, so it does not tell us the actual amount of HCP that binds. Ideally, we want to find a peptide that has a high affinity for the target protein we are looking to purify but a low HCP intensity (the green square in the graph below). We do not want the result to fall in the red square, which indicates high affinity for the target protein but also a high HCP intensity.
Doug is working to quantify the amount of HCP that is bound by the peptides. To do so, he is printing different concentrations of HCP onto nitrocellulose (negatively charged paper) as 3×5 microarrays. These spots of known HCP amount will then be analyzed for intensity to generate a standard curve, which will in turn be used to determine the unknown HCP amounts for the peptide data.
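Here is a simple sketch of how a standard curve like that gets used, assuming a linear relationship between printed HCP amount and spot intensity (the real curve may well need a different fit) and using invented intensity values.

```python
# A simple sketch of turning spot intensity back into an HCP amount via a
# standard curve. The amounts and intensities are invented, and a linear fit
# is assumed for illustration.
import numpy as np

known_ng  = np.array([5, 10, 25, 50, 100, 200])           # printed HCP amounts (ng)
intensity = np.array([180, 350, 900, 1750, 3400, 6900])   # measured spot intensities

slope, intercept = np.polyfit(known_ng, intensity, 1)     # intensity = slope*ng + intercept

def hcp_amount(measured_intensity: float) -> float:
    """Invert the standard curve to estimate the HCP amount on a peptide spot."""
    return (measured_intensity - intercept) / slope

print(f"{hcp_amount(2500):.1f} ng")  # estimate for an unknown peptide spot
```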
My first job was to pipette 80 microliters of 6 different concentrations of HCP into their specified positions in the printer well plate. We then cut the nitrocellulose paper into slide-shaped pieces and taped them onto the printer so they wouldn't move during printing. When we were setting the heights for the printer needle, one of the pieces of nitrocellulose cracked, so we had to untape it and replace it.

Once we finally had the printer set up, we started the first round of printing and found that the middle spot on the second slide was not printing. We then reconfigured the needle heights, bringing each needle closer to the nitrocellulose. As we continued to run the machine, we realized that the needle was popping up every time it went to print on the first slide, so we had to tap it back down each time it was positioned there. Because the printing was taking so long, we decided to do only 10 runs instead of the 20 originally planned. Even the 10 runs took over 2 hours to complete!
I look forward to returning to RPI next week to see what data they collected from this experiment!
Tuesday, September 24, 2013
More Amino Acids
Last Friday (September 20th), I returned to RPI for more lab work. I had a shorter day than usual because I had to leave early to go to Massachusetts for a horse show. This week, I continued working on preparing amino acid solutions and finished the rest of the 20 amino acids that I started making last week! These amino acid solutions will eventually be used in microarray experiments.
I will not be able to go to RPI next Friday because I am participating in a program at Cornell, but I'll look forward to returning on October 4th!!
Saturday, September 14, 2013
Data Preparation and Amino Acids
Yesterday (September 13th), I went over to RPI for more work in the lab. I got to see the gel I prepared last week! Unfortunately, the gel turned out to be useless: it contained too many metal ions, which produced what was essentially one extended smear of dye instead of separated bands. The gel was still helpful, though, because now we know not to attempt any more gels with those types of solutions!
My first job for the day was to compile some of the data from our previous Claudin-5 microarrays. For each of our past experiments, we have pixel intensity data for the resulting microarrays, and at this point we want to compile all of the past data so we can look at it as a whole. On JP's computer, each past experiment has its own folder, so I copied a .mev converter file into each folder. I then opened the converter file along with all of the .mev files for that experiment and moved all of the data into the converter file. After doing this for every experiment, the .mev converters were ready to run, and they will be run by next week!
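For a rough idea of what that consolidation step looks like, here is a hedged Python sketch. I'm assuming the .mev exports are tab-delimited tables with "#" header lines and a folder layout like experiments/<name>/, which may not match the real converter files exactly.

```python
# A hedged sketch of consolidating per-experiment intensity data, NOT the
# actual .mev converter. The folder layout and file format are assumptions.
import glob
import os
import pandas as pd

frames = []
for path in glob.glob("experiments/*/*.mev"):         # one folder per past experiment
    df = pd.read_csv(path, sep="\t", comment="#")     # assumed tab-delimited intensity table
    df["experiment"] = os.path.basename(os.path.dirname(path))
    frames.append(df)

all_data = pd.concat(frames, ignore_index=True)       # every experiment in one table
all_data.to_csv("claudin5_microarray_combined.csv", index=False)
```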
My second job was to prepare tubes for making amino acid solutions. I labeled each tube with:
Three-letter code (one-letter code)
mg of amino acid to be added
mL of solution to be added
After doing this for all 20 amino acids, I then began to add the specified amount of each amino acid to its specified tube.
I can't wait to continue my research next Friday!!