Monday, December 23, 2013

News from conferences

Following up on the last post, here is a list of recent reports from the protein science and engineering field.

Resurrecting genes from the Last Common Ancestor demonstrates its complexity

I do not know why, but papers about evolution always fascinate me (as long as they are not overwhelmed with population genetics and other hard-core stats and math). This time I came across a paper in JACS where the authors used a computational technique called ancestral sequence reconstruction (ASR) to rebuild ancient versions of enzymes that nowadays form a closely related bi-enzymatic complex. Basically, knowing the phylogeny (a map of how species relate to each other) lets you estimate the most likely (maximum-likelihood) ancestral sequence, i.e. which mutations occurred back in its evolutionary history; or, vice versa, you can estimate the probability that the current phylogeny would evolve given that ancestral protein sequence. Excitingly, you can go as far back as you like, which means all the way to the Last Universal Common Ancestor that we all evolved from (LUCA). To get that far, the wider the phylogenetic tree the better: say, from the most distant archaea to the bacteria.

The authors asked whether LUCA, back then some 3.5 billion years ago, already had elaborate enzymatic networks. For that reason they rebuilt the bi-enzyme complex of imidazole glycerol phosphate synthase (the cyclase subunit HisF and the glutaminase subunit HisH). Note that neither subunit works without the other, due to their close coupling in the synthesis reaction. They were able to show that the reconstructed proteins still retain almost the same specific activity and, surprisingly, are able to tightly associate with each other. This work demonstrates that even LUCA had enzymes that were very closely associated and were able to perform such things as substrate tunneling (when the product of one reaction is passed directly to another enzyme) or allosteric regulation (when the product of one reaction regulates another enzyme). I wonder what the ribosome would look like, given its high evolutionary conservation.
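To make the maximum-likelihood idea a bit more concrete, here is a minimal sketch of marginal ancestral state reconstruction at a single alignment column, using Felsenstein's pruning algorithm with a simple Jukes-Cantor-like model over the 20 amino acids. The toy tree, branch lengths and observed residues are all invented for illustration; real ASR (for HisF/HisH or anything else) works on full alignments with empirical substitution matrices and a maximum-likelihood phylogeny.

```python
# Toy marginal ancestral state reconstruction at one alignment column.
# Everything here (tree, branch lengths, residues) is a made-up example.
import math

AA = "ACDEFGHIKLMNPQRSTVWY"
N = len(AA)

def jc_prob(t):
    """P(i -> j) after branch length t under an equal-rates (JC-like) model."""
    same = 1.0 / N + (1.0 - 1.0 / N) * math.exp(-N / (N - 1) * t)
    diff = (1.0 - same) / (N - 1)
    return same, diff

def leaf_vector(residue):
    """Conditional likelihoods at a leaf: 1 for the observed residue, else 0."""
    return [1.0 if aa == residue else 0.0 for aa in AA]

def prune(children):
    """Felsenstein pruning: combine (likelihood vector, branch length) children
    into the parent's conditional likelihood vector."""
    parent = [1.0] * N
    for vec, t in children:
        same, diff = jc_prob(t)
        msg = []
        for i in range(N):
            msg.append(sum((same if i == j else diff) * vec[j] for j in range(N)))
        parent = [p * m for p, m in zip(parent, msg)]
    return parent

# Toy tree: ((archaeon:0.8, bacterium:0.9)ancestor); one alignment column.
lik_at_root = prune([(leaf_vector("H"), 0.8), (leaf_vector("Q"), 0.9)])
total = sum(lik_at_root)
posterior = {aa: l / total for aa, l in zip(AA, lik_at_root)}  # uniform prior
best = max(posterior, key=posterior.get)
print(f"most likely ancestral residue: {best} (p = {posterior[best]:.2f})")
```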

The reason I decided to write a post about this paper is that the ASR technique, in the era of very cheap gene synthesis, lets us 'play' with protein sequences: we can go back in evolution and obtain probably more promiscuous enzymes that we can then more easily 'teach' to perform the reactions we want. Alternatively, we can improve the folding properties of our protein by more targeted mutagenesis (since we have a good guess about its evolutionary history). Also, we might be able to produce orthogonal protein-protein networks that still retain their specific activity while not being regulated by intracellular proteins. Any other ideas?

Sunday, December 22, 2013

Building chemical nanoreactors out of proteins

Hi there! Today's short post is about using proteins as generic scaffolds for designing chemical nanoreactors.

Many good chemistries cannot simply be accomplished in a test tube, because in order to proceed these reactions require very special conditions, such as the presence of a metal in a certain oxidation state, or shielding of the whole reaction from the aqueous solution because a very unstable intermediate complex forms along the reaction path. Thus having nanoreactors with controllable conditions is a target of many chemists nowadays. There have been numerous attempts to build such things out of complex organic molecules; however, every single one needs a special approach and therefore a lot of effort. In contrast, mother nature successfully solved this problem (and keeps solving it) with proteins, the most generic chemical reactors. The reaction centers of many enzymes provide special conditions such as high hydrophobicity (lack of water molecules), or the positioning of the remaining water molecules in reaction-favorable places. Upon finishing the reaction, the active site of the protein releases the product thanks to its special characteristics. Thus some enzymes are able to perform a reaction up to a million times per second (the turnover rate of carbonic anhydrase is half a million!), much faster than chemistry in a homogeneous environment would allow.

This time a group from the University of Basel, under the leadership of Professor Nico Bruns, used a protein that normally helps other proteins to fold (a chaperonin) as a nanoreactor for polymerization. These sorts of proteins form hydrophobic pores that are large enough to let macromolecules enter and leave. Thus the authors conjectured that this would be a perfect scaffold for assisting a polymerization reaction. They simply modified a cysteine residue of a chaperonin mutant with an EDTA-like compound, which let the catalytic copper ion be trapped inside the cavity, whereas monomers were allowed to enter the pore by diffusion. As a result they were able to obtain a polymer with a very low polydispersity index.
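For readers unfamiliar with the metric, here is a minimal sketch of how the polydispersity index is defined (PDI = Mw/Mn, the ratio of the weight-average to the number-average molar mass). The chain lengths and monomer mass below are invented, just to show that a narrow length distribution gives a PDI close to 1.

```python
# Toy PDI calculation; the monomer mass and chain lengths are hypothetical.
MONOMER_MASS = 100.0  # g/mol, assumed value for illustration

def pdi(chain_lengths):
    masses = [n * MONOMER_MASS for n in chain_lengths]
    mn = sum(masses) / len(masses)                 # number-average molar mass
    mw = sum(m * m for m in masses) / sum(masses)  # weight-average molar mass
    return mw / mn

narrow = [98, 100, 101, 99, 100, 102]   # tightly controlled polymerization
broad = [20, 60, 100, 180, 300, 500]    # uncontrolled polymerization
print(f"narrow distribution PDI = {pdi(narrow):.3f}")
print(f"broad distribution PDI  = {pdi(broad):.3f}")
```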

Another example is probably less successful (that's why it is in ChemBioChem and not in Angewandte Chemie, where the previous paper appeared), but still interesting. A group of chemists installed a genetically encoded reactive handle into a protein scaffold (using the unnatural amino acid incorporation technology developed by Prof. Peter G. Schultz). They then coupled a number of BCN-linked organic ligands to the protein via Strain-Promoted Azide-Alkyne Cycloaddition (SPAAC). The ligands, in turn, could form complexes with metals such as rhodium, manganese and copper. Although the authors could not reach the anticipated velocities for some of the reactions, they were still able to demonstrate the possibility of building such artificial metallo-enzymes. Maybe the use of other protein scaffolds (such as the chaperonin mentioned above), with the aid of computational design (or perhaps directed evolution), could help us get very active enzymes for the biotech and pharma industries in the future.

Tuesday, December 17, 2013

Directed evolution can be done faster and more effectively

Artificial (or directed) evolution experiments have long seemed to me like an easy-to-understand concept. Indeed, it's clear: you take a gene (representing a genotype), mutate it or part of it, preferably in a random way, express it (to obtain a phenotype) and choose the variants that match your criteria. This process you can repeat again and again until you exhaust your passion.
In huge contrast to the theoretical concept, things become much hazier once you start designing a directed evolution setup (I have never done it myself). Here is where problems may come in: the size of the library (the bigger the better, if you can experimentally afford it), how much time you can spend on the exercise (in other words, how long each evolutionary round will take), and you also have to consider the possible biases that come from the experimental system you use. For instance, in vitro systems that normally compartmentalize a single gene and its protein product are not self-sustaining (see my other post on how researchers are trying to make such a system sustainable). With in vivo evolution methods, on the other hand, you apply selective pressure to the whole organism rather than only to your gene of interest, so you might end up with a more sophisticated pattern of unintentional selective pressures. The latter, for instance, affects the most advanced method, 'phage-assisted continuous evolution' (PACE; you can read about it in Nature or on the HHMI site). It is great in terms of its ease: it relies on M13 phage propagation (which makes for a very short turnover time) and it all happens in one constantly replenished medium (so it is continuous!). Also, due to the fast phage proliferation, it lets you pick up even the rarest but most efficient mutants of your gene.



Here is an E. coli cell infected with M13 phage. The phage lacks one important protein (pIII), whose gene is on an extra plasmid (red circle). You transform this plasmid along with a plasmid bearing the gene of interest (green circle) that will drive pIII expression and result in the maturation of infection-competent phage.


Again, this system is limited, since obtaining your most precious active mutant depends on phage replication, whose biochemistry is totally irrelevant to you.

So, it took only two years to design another directed evolution system that almost completely lacks this drawback, though it has some others (more on this later). It was published in a recent issue of Nature Biotechnology. Instead of using a phage as a 'machine' to amplify the protein mutants, the group of Professor Andrew Ellington (here is the link to his lab) simply coupled the activity of your gene of interest to the synthesis of Taq DNA polymerase, which we normally use in PCR (they call it 'compartmentalized partnered replication', CPR, don't get confused). The design looks very similar to PACE:



The only difference is that you drive the expression of the enzyme only (not of a whole phage). Next, you put your cells into vesicles along with primers and simply run a PCR. PCR will thus proceed best in those vesicles containing the larger number of Taq polymerase molecules. Again, similarly to PACE, we are dealing with an amplification that lets you pick up the rarest mutants. The problem here is that it is discontinuous, although the authors consider this beneficial, as it adds another lever to the evolutionary process. Also, the production of your library is still dependent on an organism, which means you will not be able to produce proteins toxic to E. coli (this might be an issue only if the production of Taq polymerase requires heaps of your protein). Thinking of both systems (I might be wrong), they do not have negative selection, which may be a problem as well. Anyway, the CPR system, due to its simplicity, might be used for tuning not only a single gene but rather a whole circuit. A big step ahead for Synthetic Biology!
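To illustrate the enrichment logic, here is a toy sketch of my own (not the authors' actual protocol or numbers): each compartment holds one gene variant, the variant's activity determines how much Taq polymerase is available, and more Taq means more copies of that variant after emulsion PCR.

```python
# Toy simulation of CPR-style enrichment; all activities, cycle counts and
# the efficiency model are invented assumptions, not data from the paper.
import random

random.seed(0)

# Library: variant -> activity (fraction of full Taq production), mostly poor.
library = {f"variant_{i}": random.betavariate(0.5, 5.0) for i in range(1000)}
library["variant_rare_good"] = 0.95  # one rare, highly active variant

def pcr_copies(activity, cycles=20):
    """Copies after PCR when per-cycle efficiency is limited by Taq amount."""
    efficiency = activity  # hypothetical: efficiency ~ Taq availability
    return (1.0 + efficiency) ** cycles

# One round of selection: amplify each compartment, then the next library is
# resampled in proportion to copy number.
weights = {v: pcr_copies(a) for v, a in library.items()}
total = sum(weights.values())
print(f"rare good variant frequency before: {1 / len(library):.4f}")
print(f"rare good variant frequency after one round: {weights['variant_rare_good'] / total:.4f}")
```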

Wednesday, December 11, 2013

Map of Synthetic biology inventors, companies etc

Hi there!
Do you know that there is a map that tracks the field of Synthetic biology worldwide? Apparently it was released in 2009 and is still growing. You can find it here.

One can also submit an inquiry to be loaded into the database, or change someone else's entry. It would be great if the map were able to update itself; I'm sure the database would then be way larger and more representative. At the moment, due to its lack of popularity, the picture might be a little skewed. Anyway, now you know that such a thing exists, and you're probably already typing the name of your institution via this link.

Tuesday, December 10, 2013

Darwinian evolution in a primitive cell

Hi all. I feel totally ashamed of my poor blog performance. I simply could not find a single hour to write a short review of the papers that I have recently read or skimmed through.

This time I would like to discuss a recent paper from the group of Professor Tetsuya Yomo at Osaka University. In this paper the authors established a platform for the artificial evolution of systems consisting of multiple components. As the object of selection they chose the RNA-directed RNA polymerase of the Qbeta phage (Qbeta polymerase). This protein is known as the most productive RNA polymerase and, according to previous reports, can produce up to 10^10 copies of a single RNA template in 10 min. However, there are major problems with this enzyme: (1) it is incredibly specific and can amplify only phage RNAs; and (2) it is often contaminated with phage RNA-derived small parasitic RNAs that effectively outcompete replication of the genomic RNA. Thus whoever can 'teach' this protein to replicate RNA molecules of our interest, in a way that is also resistant to the presence of parasitic RNAs, can build (to say the least) a very efficient way to amplify RNA molecules.

What's more important, you can design an artificial cell containing many components whose RNA templates will all be replicated by Qbeta polymerase. Further applications really depend on your imagination. Firstly, you can explore the very basic principles of the evolution of complex pre-biological systems. Thinking of more practical and immediate uses: one could design a self-sustained cell-free translation system consisting only of the proteins and RNAs that you need, with the properties you dare to want.
Alternatively, even without approaching such a tantalizing goal, you could perform artificial evolution experiments on systems of proteins. This has not been possible so far, and protein engineering exercises have been done only on single, relatively 'simple' protein species.

So what has the group of Professor Tetsuya Yomo done? They simply overcame the problem of parasitic RNAs by compartmentalizing the replication reaction in lipid vesicles, such that the non-functional RNAs that had been prevailing in the replication mix were to a major extent eliminated over the evolution iterations. Also, in order to keep the mutant RNAs and Qbeta replicase isolated from the other mutants, they fused the RNA replication vesicles with vesicles that contained an E. coli translation extract (the so-called PURE system).

This compartmentalization approach reminded me of one of the prisoner's dilemma variants explained in the book 'Supercooperators': 'defectors' that use the resources of others without giving back normally outcompete 'cooperators' in a homogeneous environment, but as soon as groups of interacting 'units' become isolated, the cooperators thrive. This, for instance, was a good way to show the importance of group selection in evolution.
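Here is a back-of-the-envelope sketch of that argument with my own toy numbers (not from the paper or the book): cooperators make a shared resource (think of a functional RNA encoding the replicase), defectors (parasitic RNAs) only consume it; in a mixed bulk reaction the defectors win, while in single-seeded compartments they have nothing to exploit.

```python
# Toy cooperator/defector model; growth rates and counts are arbitrary.
GENERATIONS = 6

def bulk(coop, defect):
    """Well-mixed pool: everyone uses the shared resource, but defectors
    replicate faster because they are smaller and cheaper to copy."""
    for _ in range(GENERATIONS):
        if coop == 0:
            break  # nobody makes the resource any more, growth stops
        coop *= 2
        defect *= 3
    return coop, defect

def compartmentalized(coop, defect):
    """Every molecule isolated in its own compartment: a lone defector has
    no resource to exploit and cannot be amplified at all."""
    return coop * 2 ** GENERATIONS, defect  # defectors stay at input count

for label, (c, d) in [("bulk", bulk(100, 100)),
                      ("compartmentalized", compartmentalized(100, 100))]:
    print(f"{label}: final cooperator fraction = {c / (c + d):.3f}")
```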

Although the authors did not reach the ultimate goal of an omni-reactive Qbeta polymerase (in fact, they did not even mention this in the paper), they were able to get a mutant polymerase that efficiently replicated its long RNA template (itself a mutant) even in the presence of parasitic RNAs. That means that the authors are well on the way to designing a replicase that will be able to amplify RNAs containing genes of our, or in fact their, interest.

Interestingly, the evolution of such a primitive cell (as they call it), which is not alive, demonstrated traits of Darwinian (i.e. biological) evolution. These are 'diminishing returns' (where ever smaller benefits are gained per mutation as the optimum is approached) and, more importantly, the rate of mutation happened to be constant throughout the whole evolutionary experiment. The same has recently been shown for E. coli cells (living matter!). I wonder if this rate is close to the optimum for viruses or relatively simple bacteria.
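As a tiny numerical illustration of 'diminishing returns' (my own toy numbers, not data from the paper): if each newly fixed beneficial mutation closes a fixed fraction of the remaining gap to the fitness optimum, the gain per mutation keeps shrinking.

```python
# Toy diminishing-returns curve; starting fitness, optimum and the 30%
# step fraction are arbitrary assumptions for illustration only.
fitness, optimum, step_fraction = 1.0, 10.0, 0.3

for mutation in range(1, 7):
    gain = step_fraction * (optimum - fitness)  # benefit of this mutation
    fitness += gain
    print(f"mutation {mutation}: gain = {gain:.2f}, fitness = {fitness:.2f}")
# Gains shrink from 2.70 toward zero as fitness approaches the optimum.
```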