Wednesday, March 19, 2008

Molecular Biology Current Innovations and Future Trends

One could be led to believe that a molecular biologist armed with a copy of 'Maniatis', or one of the 'Current Protocols' publications, would have adequate technical support to accomplish most experimental procedures successfully. In the real laboratory world, we know that even established methodology is adapting and changing at an alarming rate and that new experimental approaches regularly appear on the horizon. This small book fills an important niche in the market: it aims to bring the reader up to date with recent innovations in established techniques and to introduce more state-of-the-art methodology, and I believe it succeeds.

The book contains ten chapters, all written by experts in their particular fields; interestingly, the editors have recruited over half the authors from the commercial sector. These contributors tend to bias their chapters towards products available from their own companies, although in general they seem to have covered their subjects fairly comprehensively. Each chapter reviews the technique, concentrating on recent innovations, and then discusses likely future trends. Most chapters end with protocols covering recent advances or more specialised approaches. Each chapter is also accompanied by an extensive list of references, in most cases concentrating on papers published in the last five years. All chapters refer to material published last year, a good indication that the editors and the publisher have succeeded in bringing this book to the bookshelves without undue delay.

The first chapter covers general PCR techniques and is written by a group of authors from Stratagene. In addition to covering recent advances in PCR methodology and instrumentation, the authors describe specific techniques such as cloning PCR-generated fragments and using PCR for site-directed mutagenesis. Sadly, the accompanying figures are black-and-white copies of coloured diagrams from the company's catalogue, and some of the detail has been lost during reproduction. A specific utilisation of PCR, thermal cycle sequencing, is described in the next chapter, which contains a generalised protocol for the technique. This is followed by a chapter devoted to methods for isolating plasmid DNA from mini-preps using silica-based resins. While a profusion of commercial kits is available, the author very rightly draws attention to the dangers of total reliance on these products and so presents a very extensive protocol utilising common laboratory reagents and equipment.

Electrophoresis is covered by three chapters, the first by Branko Kozulic, who provides a very readable account of recent theories that attempt to explain electrophoretic phenomena, including his own 'door-corridor' model. He also provides a tantalizing glimpse into the world of new gel matrices and intercalating dyes. The second chapter is devoted to pulsed-field gel electrophoresis (PFGE), in which the authors review the various aspects of the technique, provide protocols for the preparation of high-molecular-weight DNA from soya bean leaves, and present physical mapping data from PFGE combined with two-dimensional electrophoresis. The third chapter describes capillary electrophoresis (CE) as applied to the isoelectric focusing of proteins and provides an extensive protocol and a troubleshooting chart.

A chapter on subtractive hybridisation describes the use of commercially available multipurpose cloning vectors to perform cDNA subtractive hybridisation between biotinylated RNA and single stranded DNA. The unhybridised product is purified by streptavidin and used for transformation. This technique should appeal to researchers involved in gene expression and developmental studies.

The widespread use of PCR in molecular biology has required the simultaneous development of reliable methods for the production of oligo primers. A chapter describes recent developments in the related field of oligoribonucleotide synthesis. The demand for synthesized RNA is likely to increase as interest in antisense RNA and the possible use of ribozymes in gene therapy intensifies.

Finally, there are two interesting chapters on instrumentation. One describes state of the art devices for automated DNA hybridization and detection and the other is devoted to a relatively new technique called matrix assisted laser desorption ionization mass spectrometry (MALDI). The authors speculate that MALDI will, in the not too distant future, replace gel electrophoresis in the analysis of DNA sequencing reactions.

This modestly priced book provides the molecular biologist with a wealth of current information on a wide variety of essential techniques. I look forward to the publication of volume 2 in this series, later this year.

http://www.horizonpress.com/hsp/revs/revs1mb.html


The Pioneers of Molecular Biology: David Baltimore

With his first experiment on the subject, he shattered existing theories of DNA and RNA function

It was not until the late 1950s that David Baltimore was even aware of the discovery that would change his life. "I was in high school when the Watson-Crick paper was published," he says, "but my teacher never mentioned it, nor did my parents, who were not particularly literate in science." At Swarthmore College, too, no one on the faculty ever talked about DNA. But as an upperclassman Baltimore majored in chemistry and began reading science journals, where he was introduced to the double helix. "I was transformed," he says. "I saw the edifice of molecular biology beginning to appear before me and decided that this was what I was going to spend the rest of my life working on."

Baltimore opted for the study of tumor viruses, fully aware of the so-called central dogma that double-stranded DNA transfers genetic information to single-stranded RNA, but that information never flows the other way. One scientist, however, Howard Temin, had earlier hypothesized that RNA-to-DNA transfer could occur, and in 1970 Baltimore set out to prove him right. Assuming that the accepted wisdom was wrong was easy, he says. "I was trained in chemistry and saw it as a chemical problem."

Baltimore shattered the dogma with his very first experiment. He discovered the enzyme, now called reverse transcriptase, that enables a retrovirus to transfer information from RNA to DNA. The implications were enormous; they suggested that a virus could infiltrate a cell's DNA and turn itself into a gene. The enzyme also turned out to be a powerful tool for probing DNA for individual genes, including the oncogenes that cause cancer. Indeed, his discovery was instrumental in development of the entire field of biotechnology.

Having loosed the genie from the bottle, Baltimore became concerned about the helter-skelter transfer of genes from one organism to another. He feared that putting entire viruses into bacteria, for example, might lead to bacteria spreading a viral disease. Fanciful stories in the press spoke darkly of creation of a "Doomsday Bug."


Concerned, Baltimore and Stanford's Paul Berg organized a conference at Asilomar, on California's Monterey peninsula. There scientists in the field agreed to a voluntary moratorium on certain kinds of biotechnology experiments and containment safeguards on others until the experiments were proven safe. "As far as we know," says Baltimore, "it was absolutely observed by everyone in the community." In retrospect, he believes the Asilomar scientists erred on the side of caution.

Realizing that the new technology might well provide a tool for fighting cancer, Baltimore converted his lab to the study of cancer viruses. Today, as president of Caltech, he's increasingly involved in research on the AIDS virus. In a way he's come full circle. HIV, like the subject of his historic experiment, is a retrovirus.

http://www.time.com/time/covers/1101030217/scdprofile1.html

The Future of Molecular Biology

The discovery, in 1953, of the double-helical structure of DNA sparked a revolution in the biological sciences, the full impact of which is only now beginning to be appreciated. The structure immediately suggested how genetic information might be replicated and soon led to the deciphering of the genetic code. Together with the discovery of methods for determining the amino acid sequences of proteins and for sequencing DNA and RNA, it paved the way for the emergence of recombinant DNA technology, which, in turn, spawned the biotechnology industry and rapidly transformed the fields of molecular biology and molecular genetics that today inform almost every aspect of biomedical science.

Given the quite extraordinary progress of the past forty years, what might we look forward to as we approach the twenty-first century? Launched in the 1980s, the international genome initiative has as its stated goal the cloning and sequencing of the 50,000 or more genes that constitute the human genome and, concurrently, the genomes of a number of other organisms, including the plant Arabidopsis, yeast, the nematode worm, the fruit fly, and the mouse. Clinical medicine will undoubtedly be the principal beneficiary of this work. Already well over one thousand disease-related genes have been identified, often leading, as in the case of cystic fibrosis, phenylketonuria, muscular dystrophy, and colon cancer, to new DNA-based diagnostic procedures and early attempts at gene therapy.

The general field of developmental biology is likely to be the other major beneficiary of the advances in molecular biology and genetics. With the new tools that are now available, rapid progress is being made in elucidating the molecular mechanisms involved in such processes as gametogenesis, fertilization, the establishment of body-plan, differential gene expression and cell-fate determination, and the control of cell proliferation and cell death. The availability of techniques for transferring genes from one organism to another or eliminating specific genes by homologous recombination is contributing to the rapid progress in our understanding of the development of even very complex systems such as the mammalian hematopoietic and immune systems.

In the long term, the greatest challenge remaining to biologists is to understand how the human brain works. Every aspect of human behavior, including our ability to perceive the world around us, to carry out appropriate motor acts, to speak and understand written or spoken language, and to feel and to express our emotions, is due to the integrated actions of the nerve cells in our brains. We are far from understanding how such high-level functions "emerge" from such low-level activities as the conduction of nerve impulses and their transmission at synaptic junctions. The "mind-brain" problem, which for centuries has been exclusively the domain of philosophers and theologians, now awaits the concerted efforts of research biologists. Its elucidation will undoubtedly be the greatest triumph of the human intellect and as revolutionary (and with as many societal consequences) as the discovery of evolution by natural selection in the mid-19th century and of the nature of genes in the mid-20th century.

http://highered.mcgraw-hill.com/sites/0073031216/student_view0/exercise6/the_future_of_molecular_bio_.html

Gene therapy revisited

In spite of problems and drawbacks, gene therapy moves forward

Since the death of Jesse Gelsinger two years ago during clinical trials at the University of Pennsylvania, gene therapy has kept a low profile, receding from the public eye. But away from the headlines, researchers are, in fact, quietly making progress and are confident that, within the next decade, gene transfer will be elevated from its current experimental status to a therapeutic modality.

At the recent Emerging Technologies in Gene/Drug Therapy and Molecular Biology meeting, sponsored by the Regulon company (Mountain View) and the International Society of Gene Therapy and Molecular Biology, a group of about 100 scientists gathered in Corfu, Greece, to discuss advances in bringing gene therapy to the clinic. They still face major challenges: targeting the right gene to the right location in the right cells and expressing it at the right time, all while minimising any adverse reactions. But the scientists presented data on the development of viral and non-viral gene vectors, tissue- and disease-specific gene delivery and cell-cycle control that indicate that the clinical use of gene transfer is becoming a tangible possibility.

http://www.nature.com/embor/journal/v2/n12/full/embor261.html

The future of gene therapy

What is the Human Genome Project? How will it help our understanding of biology?

The Human Genome Project was an international consortium that set out to sequence the whole human genome. Everyone's genome varies, but only very slightly. You and I, despite coming from different parts of the world, are mostly alike. Most of our base pairs, or code, are identical. We probably vary in about 1 per cent of our genome. So, if you sequence just one person, you get 99 per cent of the information about the human genome. A small part of the basic human genome is yet to be decoded. But once that is done, you have something like the periodic table for chemistry. It is like chemistry 100 years ago. In my view, biology is 100 years behind the physical sciences in terms of its basic understanding. The genome project is one of the major developments that would help bring rapid progress.
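The "99 per cent" figure is simply a per-position comparison of aligned sequences. A minimal sketch of that comparison, using invented 20-base sequences purely for illustration:

```python
def percent_identity(seq1, seq2):
    """Percentage of aligned positions at which two equal-length sequences match."""
    assert len(seq1) == len(seq2), "this toy version assumes pre-aligned sequences"
    matches = sum(a == b for a, b in zip(seq1, seq2))
    return 100.0 * matches / len(seq1)

# Two invented 20-base stretches differing at a single position
person_a = "ATGCGTACCTGAAGTCCTGA"
person_b = "ATGCGTACCTGAAGTACTGA"
print(percent_identity(person_a, person_b))  # → 95.0
```

Real genome comparisons first require alignment of sequences that may differ in length; the per-position count above is only the final, trivial step.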

According to reports on the genome project, only mono-gene disorders have been sorted out. How will this help in our understanding of the causes and treatment of disorders arising from a single gene?

You are right. Only simple, mono-gene disorders have been sorted out. But even that is a major development. Such genes act within a family: either you have the gene or you don't. If you don't, you will not get the disease; if you do, you will. Huntington's disease, for instance, comes under this category. It is a dominant disorder passed down through the generations. The genome project helps find these disease genes very quickly.

From the genome project results, how does one go about finding out the existence of the gene that causes a mono-gene disorder?

You take a family with, say, 20 affected individuals and have markers scattered throughout the genome. You then apply the markers to the family and look for the marker that segregates with the disease. Everyone in the family who could get the disease will have one type of marker and those who would not get it, another type. Suppose those who would not get the disease have the 200th marker and those who could get the disease have the 201st marker; then you know that the disease gene must lie close to wherever the 201st marker is. You then go to the database provided by the genome project, in the computer, and find out what genes come from the region where the 201st marker is. Let us say that you get a list of 20 genes. You then find out which of those are expressed in the central nervous system. Say 10 of them are, and suppose you already know that five of them cause some other disease; then you are left with only five genes, which you sequence to find out which one causes the disease. This is the process by which you identify, very rapidly, a single gene responsible for a particular disease.
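The narrowing-down described above is essentially a pair of set filters applied to the genes near the co-segregating marker. A minimal sketch, with entirely hypothetical gene names and annotations (real work would query a genome database, not a hand-written dictionary):

```python
# Hypothetical candidate genes retrieved from the region around the
# marker that co-segregates with the disease. All names and annotations
# here are invented for illustration.
candidates = {
    "GENE_A": {"cns_expressed": True,  "known_disease": None},
    "GENE_B": {"cns_expressed": False, "known_disease": None},
    "GENE_C": {"cns_expressed": True,  "known_disease": "deafness"},
    "GENE_D": {"cns_expressed": True,  "known_disease": None},
}

# Filter 1: keep only genes expressed in the central nervous system
cns = {g: info for g, info in candidates.items() if info["cns_expressed"]}

# Filter 2: discard genes already attributed to an unrelated disease
to_sequence = sorted(g for g, info in cns.items() if info["known_disease"] is None)

print(to_sequence)  # the short list of genes to sequence for mutations
```

Only the genes surviving both filters need to be sequenced in patients, which is what makes the marker-plus-database approach so much faster than cloning the region by hand.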

How was this done prior to the genome project?

It was done by a method called 'linkage'. Once you knew which chromosome the disease-causing gene was on and the marker associated with it, you had to sequence the DNA [deoxyribonucleic acid] yourself. To do that you had to clone it all, and it was a huge task. Now it has been done for you. You just have to find the mutation.

So, do we now have the facility to sort out all single-gene disorders?

In the next five to 10 years almost all disorders caused by a single gene will be sorted out. This is no mean achievement.

What are the dominant mono-gene disorders that are to be sorted out by the genome project?

Huntington's disease, some forms of Alzheimer's, epilepsy and Parkinson's disease, and a lot of muscle diseases, including muscular dystrophy. There is a long list of rare diseases.

We have about 30,000 genes; over half of them are expected to be expressed in the central nervous system. Random mutations occur across those genes. So, over half of the diseases that may occur are going to be neurological. It is thus not surprising that the long list of genetic diseases expresses itself largely in the nervous system.

What are poly-gene or complex disorders? Is there a possibility that they will be sorted out in the near future?

For a single-gene disorder, everyone within a family who has a particular genetic disease is likely to have the same genetic abnormality, as there is a single very strong genetic factor causing the disease. This is easy to find, as it stands out.

But take, for example, epilepsy, which is mostly not transmitted through generations. You may have just one or two people in a family with epilepsy, and that does not give you enough information. The disease may be a result of a complex interplay of genetic and environmental factors. Thus, in the case of complex disorders, it is not just an abnormality in the genes that causes the disease. By itself the gene does not tell you anything. It is a combination of genes and environmental factors that causes complex disorders such as epilepsy.

To find out the cause of such diseases it is not enough to study one person or a few families; you need to study hundreds of people, as you cannot separate them to start with when you do not know what comparisons to make. It is thus best to start the study with a large population and do the mapping. Then go back and say that this type of epilepsy is mostly because of these factors, and so on. Even in this case we are only guessing. But, surely, an approach where one lumps patients together to start with and splits them into categories later is a good one. In the past, what was done was to split patients into disease categories and then say you have got this or that type of epilepsy. There is some basis in that, but I think one should not get too fixed on it.

Are there genetic differences across ethnic groups? And would that make the identification of genetic disorders easier?

Yes, undoubtedly there are ethnic variations in the genetic make-up. Some common diseases vary in frequency throughout the world. For example, in Singapore, brain haemorrhage, a common cause of stroke, is more common than in the West. The reason for that is not very clear as yet. It may be because of differences in diet, environment and so on. But, as is being increasingly found, it is to a large extent genetically driven. You will have to take into account what the frequencies of the disease are in different populations. Alzheimer's is a big problem in the United Kingdom. But in some other parts of the world, where life expectancy at birth is low, people die before they can even get it. Thus, there are diseases such as Alzheimer's, Parkinson's and stroke, as well as cancer, that are becoming major problems because people are living longer now.

http://www.hinduonnet.com/thehindu/fline/fl2003/stories/20030214001208400.htm

How DNA Computers Will Work

Even as you read this article, computer chip manufacturers are furiously racing to make the next microprocessor that will topple speed records. Sooner or later, though, this competition is bound to hit a wall. Microprocessors made of silicon will eventually reach their limits of speed and miniaturization. Chip makers need a new material to produce faster computing speeds.

You won't believe where scientists have found the new material they need to build the next generation of microprocessors. Millions of natural supercomputers exist inside living organisms, including your body. DNA (deoxyribonucleic acid) molecules, the material our genes are made of, have the potential to perform calculations many times faster than the world's most powerful human-built computers. DNA might one day be integrated into a computer chip to create a so-called biochip that will push computers even faster. DNA molecules have already been harnessed to perform complex mathematical problems.

While still in their infancy, DNA computers will be capable of storing billions of times more data than your personal computer. In this article, you'll learn how scientists are using genetic material to create nano-computers that might take the place of silicon-based computers in the next decade.

Surpassing Silicon?
Although DNA computers haven't overtaken silicon-based microprocessors, researchers have made some progress in using genetic code for computation. In 2003, Israeli scientists demonstrated a limited, but functioning, DNA computer. You can read more about it at National Geographic.

http://computer.howstuffworks.com/dna-computer.htm

DNA computing

DNA computing is a form of computing which uses DNA, biochemistry and molecular biology, instead of the traditional silicon-based computer technologies. DNA computing, or, more generally, molecular computing, is a fast developing interdisciplinary area. R&D in this area concerns theory, experiments and applications of DNA computing.

History

This field was initially developed by Leonard Adleman of the University of Southern California, in 1994 [1]. Adleman demonstrated a proof-of-concept use of DNA as a form of computation by solving a seven-node instance of the Hamiltonian path problem. Since the initial Adleman experiments, advances have been made and various Turing machines have been proven to be constructible [2][3].
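The problem Adleman's experiment solved can be stated conventionally: find a path through a directed graph that visits every vertex exactly once. A brute-force sketch in Python (the seven-vertex graph below is illustrative, not Adleman's actual instance), which tries candidate orderings one at a time where the DNA experiment generated and tested them massively in parallel:

```python
from itertools import permutations

def hamiltonian_path(vertices, edges):
    """Return the first path visiting every vertex exactly once, or None.

    Brute force: test every ordering of the vertices, checking that each
    consecutive pair is joined by a directed edge. Adleman's experiment
    did the analogous generate-and-filter step chemically, with strands
    of DNA encoding vertices and edges.
    """
    edge_set = set(edges)
    for order in permutations(vertices):
        if all((a, b) in edge_set for a, b in zip(order, order[1:])):
            return list(order)
    return None

# A small illustrative seven-vertex directed graph
vertices = range(7)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (0, 3), (2, 5)]
print(hamiltonian_path(vertices, edges))  # → [0, 1, 2, 3, 4, 5, 6]
```

The point of the DNA approach is not a better algorithm but a vastly wider search: every candidate path is represented by a molecule, so all orderings are generated at once rather than enumerated sequentially as here.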

In 2002, researchers from the Weizmann Institute of Science in Rehovot, Israel, unveiled a programmable molecular computing machine composed of enzymes and DNA molecules instead of silicon microchips. The computer could perform 330 trillion operations per second, more than 100,000 times the speed of the fastest PC [4]. On April 28, 2004, Ehud Shapiro, Yaakov Benenson, Binyamin Gil, Uri Ben-Dor, and Rivka Adar at the Weizmann Institute announced in the journal Nature that they had constructed a DNA computer [5]. Coupled with an input and output module, it was capable of diagnosing cancerous activity within a cell and then releasing an anti-cancer drug upon diagnosis.


Capabilities

DNA computing is fundamentally similar to parallel computing in that it takes advantage of the many different molecules of DNA to try many different possibilities at once.

For certain specialized problems, DNA computers are faster and smaller than any other computer built so far. But DNA computing does not provide any new capabilities from the standpoint of computational complexity theory, the study of which computational problems are difficult to solve. For example, problems which grow exponentially with the size of the problem (EXPSPACE problems) on von Neumann machines still grow exponentially with the size of the problem on DNA machines. For very large EXPSPACE problems, the amount of DNA required is too large to be practical. (Quantum computing, on the other hand, does provide some interesting new capabilities).
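The scaling objection can be made concrete with a back-of-the-envelope calculation: if each candidate solution is encoded as one DNA strand, a brute-force search over the n! orderings of n vertices needs a mass of DNA that explodes with n. The per-strand mass below is an assumed round figure (on the order of 10^-21 g for a short oligonucleotide), so treat the outputs as order-of-magnitude estimates only:

```python
import math

STRAND_MASS_G = 1e-21  # assumed rough mass of one short DNA strand, in grams

# One strand per candidate ordering: n! strands for an n-vertex problem
for n in (10, 20, 30, 40):
    strands = math.factorial(n)
    grams = strands * STRAND_MASS_G
    print(f"n={n:2d}: {strands:.2e} strands, roughly {grams:.2e} g of DNA")
```

Even at modest n the required mass passes from negligible to planetary, which is why the massive parallelism of DNA only postpones, and never removes, exponential growth.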

DNA computing overlaps with, but is distinct from, DNA nanotechnology. The latter uses the specificity of Watson-Crick basepairing and other DNA properties to make novel structures out of DNA. These structures can be used for DNA computing, but they do not have to be. Additionally, DNA computing can be done without using the types of molecules made possible by DNA nanotechnology (as the above examples show).

http://en.wikipedia.org/wiki/DNA_computing

The Future of DNA

Genetic engineering is increasingly becoming part of our daily lives.
For instance, the food processing industry depends on it to a large
extent and many modern diagnostic tests in medicine are based on methods
derived from DNA technology. Along with these advances, the public is
becoming more aware of the enormous potential of the technology, as well
as the ethical and social issues related to it. Thus, scientific views
about DNA and genes challenge our fundamental concepts about life,
nature, society and humanity.

The public debate about genetic engineering is based on a paradigm that
seems to be widely accepted by scientists, as well as by laymen. It is
the paradigm of reductionist biology, which postulates that all
attributes and characters of life in its substance and form are
ultimately determined by genes. Other factors like the natural and the
social environment are recognised as being only of secondary importance.

There are however other possible approaches to an understanding of life.
Some of them stress the contextual and relational qualities of organisms
and consider them to be the basic cause rather than the consequence of
molecular interactions at the genetic, i.e. the DNA, level. They
acknowledge that every living being is endowed with its own dynamics,
sustained by the interaction of both environment and genes. But
approaches to understanding life that encompass genetic determinism are
also conceivable. Indeed, molecular biological discoveries themselves
prompt us to search for such approaches.

Such a search would be of value not only to philosophers of science or
epistemologists, but also to all those concerned with biological science
and its application. From the outset, our concepts and ideas shape our
perceptions of the world and determine our actions. Thus, ethical or
moral values necessarily reflect our scientific outlook on the world.

Some initial questions related to the scientific and social aspects of
genetic engineering can be identified: Where does the power of this
technology originate from? What characters and properties of living
beings does it unravel? Where and how does it come up against
limitations?

A second group of issues relates to the presuppositions of DNA thinking.
The success of molecular biology often hides the fact that its
scientific and philosophical foundation is open to being questioned and
reflected upon like any approach to understanding life. Obviously, such
reflections are more fundamental than socio-economic interests and
concerns, which are anyway to do with applications of the technology.
Indeed they transcend an ethical debate which is restricted to risk-
benefit assessment, be it in ecology, public health or social rights.

At this conference the fundamental issues will be tackled in several
different ways. On the first day, the discussion will focus on
scientific and social aspects. The introductory lectures will shed light
on benefits, challenges and dangers of DNA thinking. What will our world
and society look like if they are shaped by concepts of molecular
genetics? What qualities of science and society will be deepened and
enlarged by gene thinking? Which qualities would be lost and how can
they ultimately be salvaged, reintroduced or formed anew?

The second day will cover molecular genetics in biology. The rate of
discovery of new genes and their functional properties and interactions
is breathtaking. Our insight into molecular function is highly advanced
and will develop in still greater depth. However, when molecular biology
moves from a descriptive to an explanatory science, obstacles are
encountered. Molecular function does not readily explain pattern
formation during development or processes of consciousness etc. The fate
of a transgenic organism in the environment cannot be deduced from the
results of DNA manipulation or calculated in advance. Thus, the theories
based on the molecular approach fail to explain life-processes. Are
there essential aspects missing?

The third day is dedicated to DNA and the human being. Faced with the
serious issues about the social impacts of the new technology, public,
scientific and medical awareness is severely challenged. Diagnosis and
therapy open a whole field of new questions which require us to rethink
and reformulate concepts such as human individuality, health and
disease.

Participants in the evening round-table discussions will share their
attitudes towards genetic engineering and aspects of their personal
biographies that led them to take their particular position. The
intention is to show that besides the ability to grasp certain
'objective' facts about this technology, the contextual environment,
i.e. the 'personal subjective approach' is of equal importance for
judgement formation.

The aim of the conference is to mobilize people - both scientists and
non-scientists - who would like to raise the dialogue above mere utility
and economic needs. The challenge is to create a pluralistic exchange of
concepts, hypotheses and images about what it is to be human and the
nature of the world. The discussion will focus on the presuppositions,
as well as the consequences and perspectives of knowledge. We hope that
through this interaction, consciousness will be raised and a broader
foundation will be provided for individual ethical judgement forming.

http://www.gene.ch/gentech/1997/8.96-5.97/msg00225.html