Welcome to 2014! I hope you have all had time to settle in a little, and are ready to begin the year afresh. As always, in the event of an earthquake take cover under your desks and then exit through the doors at the front and back in an orderly fashion after the shaking ends. If there is a fire, leave immediately and do not panic. If both occur simultaneously, hope.
Our topic is, as always, the Institute for Creation Research: as such 2013 is not yet over for us. Today the subject is December’s infamous ‘duons’, and the Creation Science Update by Jeffrey Tomkins is called “Duons: Parallel Gene Code Defies Evolution.”
I can hardly claim to have been looking for long enough to call it a trend, but it does seem to me that scientific press releases – particularly in genetics and related fields – are in a bad way of late. While previously it was possible to claim to have single-handedly discovered that there were classes of functional genetic elements beyond the protein-coding gene, and with that to have also disproved junk DNA, the ENCODE PR seems to have forced some to go much further.* If the implications of the duon press release were to be believed, its authors belong with the great molecular biologists of the 20th century, having found a whole new genetic code buried in the genome. But that’s definitely an oversell of their paper, which is called “Exonic Transcription Factor Binding Directs Codon Choice and Affects Protein Evolution” (pdf) and was published in Science in mid-December.
We have known for decades now that protein-coding genes code for protein via sets of three nucleotides – codons – which translate into amino acids. The discovery that other nucleotide sequences don’t code for proteins but instead are “regulatory elements,” which affect gene expression and other cell processes indirectly, is similarly ancient. And since at least the 90s it has been known that parts of protein-coding genes can simultaneously act as regulatory elements, in the form of transcription factor (TF) binding sites. What this paper newly reports is that, in the 81 human cell types studied, 86.9% of genes showed evidence of containing these elements (about a third in any given cell type); that ~14% “of all human coding bases contact a TF in at least one cell type”; and that around 17% of point mutations affecting these bases influence TF binding. Oh, and the authors tried to secure for themselves immortality by coining the term “duons” to describe this “dual encoding.”**
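The “dual encoding” idea is easy to demonstrate in miniature. The sketch below is my own illustration, not anything from the paper: it uses a tiny subset of the standard genetic code and a made-up TF recognition motif to show how two synonymous codons can produce the same protein while only one version of the DNA carries the binding site.

```python
# Illustrative sketch (not from the paper): CTG and TTG both encode leucine,
# so swapping one for the other leaves the protein unchanged – but the DNA
# sequence differs, and a hypothetical TF motif can be present in only one.

# Minimal codon table covering just the codons used below.
CODON_TABLE = {
    "ATG": "M", "CTG": "L", "TTG": "L", "GAA": "E", "TAA": "*",
}

def translate(dna):
    """Translate an in-frame coding sequence codon by codon."""
    return "".join(CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3))

seq_a = "ATGCTGGAATAA"  # uses CTG for leucine
seq_b = "ATGTTGGAATAA"  # synonymous swap: TTG for leucine

assert translate(seq_a) == translate(seq_b) == "MLE*"  # identical protein

motif = "GCTGG"  # hypothetical TF recognition sequence
print(motif in seq_a)  # True  – the motif spans the CTG codon
print(motif in seq_b)  # False – destroyed by the synonymous change
```

The point mutations the paper counts are exactly this kind of change: silent at the protein level, but visible to a bound transcription factor.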
As I have already alluded to, the reaction to this paper – and particularly its press release – has been less than positive in the scientific blogosphere. The overarching theme has been that it is overhyped: Josh Witten wrote “This is not an unreasonable hypothesis, but it is hardly shocking, hardly requires a new term, and is hardly a controversy.” Emily Willingham, in her analysis, noted where the paper says:
Apart from arginine [...] for all amino acids encoded by two or more codons the codon that is preferentially used genome-wide is also preferentially occupied by TFs.
That’s not a “second code,” even though the news release describes it that way. It’s a different (but already recognized) use of an existing code, now identified as occurring at a greater than previously recognized frequency in areas that use the same code for proteins.
The article is also a likely frontrunner in Michael Eisen’s Pressie Awards for overhyped press releases.
So where do the creationists come in? As one commenter on the press release noted,
Still a week after publication and the creationists and anti-science crowd have not read the actual Science article.
The even more bastardised version of the story being used by the creationists seems to run along the lines of “the genome is now doubly complex, and therefore doubly un-evolvable.” The cdesign proponentist David Klinghoffer put it this way:
Now as a thought experiment, imagine that instead of two languages operating simultaneously, there are three. Or four. Five? Why not? After all, says Dr. Stamatoyannopoulos, for 40 years scientists missed that there were two. At what number do you stop in your tracks and say: This required planning.
The ICR, as you should well know, comes to these stories a couple of weeks after everyone else. I had dared hope, when pondering what Tomkins might say, that he would take a different tack. Maybe he would take the story apart as an example of how shoddy secular science can be. He wouldn’t be a million miles from a bona fide point if he did, but that would also involve repudiating his previous regurgitation of PR-hyped research such as ENCODE.
I shouldn’t have, though: Tomkins buys the story wholesale, including its creationist spin. He opens his article:
Researchers have just characterized a new, previously hidden genetic code embedded within the same sections of genes that code for proteins—utterly defying all naturalistic explanations for its existence.
Showing how little background reading he has apparently done to check the claims made in the press release, he says:
Before this study, scientists were aware that the protein-coding regions of genes had mysterious signals other than codons that told the cell machinery how to regulate and process the RNA transcripts (copies of genes) prior to making the protein. Researchers originally thought that these regulatory codes and the protein template codes containing the codons operated independently of each other.
The point of a press release is to sell your work as much as possible, and implying that what is really past work by others is your own has apparently become an integral part of the package. Tomkins doesn’t mind, as such inaccuracy is if anything a benefit for the purposes of his own articles: it lets him present the story as a shocking new discovery that challenges everything we previously knew.
His article was written in 2013, meaning that when he says,
Scientists just last year reported that transcription factors clamped onto some exons inside genes but did not understand this dual code system until now.
…he is talking about a paper published in September of 2012. Despite claims of the discovery of a new “code,” the newer paper here seems to be only an incremental step beyond the older one.
Having explained as much of the research as he sees fit, Tomkins moves on to the “isn’t it complicated?” part that is used to explain how all this could possibly “defy evolution” as he claims. He writes:
The human mind struggles to comprehend the overall complexity of the genetic code—especially the emerging evidence showing that some genes have sections that can be read both forward and backward. Some genes overlap parts of other genes in the genome, and now it has been revealed that many genes have areas that contain dual codes within the very same sequence.
As an aside, the “forward and backward” thing comes from a 2013 DpSU that I never got around to writing up, called “Bewildering Pseudogene Functions Both Forwards and Backwards.” In short, we have a typical protein-coding gene that is regulated, in part, by little sections of RNA which interfere with its translation. But this gene has a pseudogene – a defective copy – which is partially transcribed and, because it has a very similar sequence to its parent gene, acts to soak up these RNAs and thus itself regulates the original. And this pseudogene can also, in part, be transcribed backwards, producing yet another RNA with a sequence that can influence the expression of the first. According to Tomkins this couldn’t have evolved.
Back to this article though, Tomkins concludes with some more meaningless gushing:
Even the most advanced computer programmers can’t come close to matching the genetic code’s incredible information density and bewildering complexity. An all-powerful Creator appears to be the only explanation for this astounding amount of seemingly infinite bioengineering in the genome.
What does he mean by “information density,” exactly? The concept has come up here before – while it’s true that a person could use DNA to record information, we can use even smaller objects as well, making DNA far from the most “dense” storage medium in terms of physical volume. More likely, though, Tomkins is referring to the natural code itself, in which case he needs to show his work.
Beneath the hype there really is a story in this paper: codon bias – in which an organism is more likely to use one specific codon when multiple codons code for the same amino acid – seems to be strongly related to the choice of codons that would influence transcription factor binding. But there is a long, long way to go between this and saying that God is the only possible explanation and that evolution must be bunk. And as usual, this part is missing, replaced by nonsense.
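For anyone unfamiliar with codon bias, it is the sort of thing you can measure with a few lines of code. This is a toy of my own making – a made-up sequence, not real genomic data – just to show what “preferring one synonymous codon over the others” means in practice, using the six standard codons for leucine.

```python
# A toy illustration (not the paper's analysis) of codon bias: counting
# how often each synonymous leucine codon appears in a coding sequence.
# In real genomes these counts are far from uniform.
from collections import Counter

# The six codons that all encode leucine in the standard genetic code.
LEUCINE_CODONS = {"TTA", "TTG", "CTT", "CTC", "CTA", "CTG"}

def leucine_usage(cds):
    """Count each leucine codon in an in-frame coding sequence."""
    codons = (cds[i:i + 3] for i in range(0, len(cds), 3))
    return Counter(c for c in codons if c in LEUCINE_CODONS)

cds = "ATGCTGCTGTTACTGTAA"  # made-up sequence: three CTGs, one TTA
print(leucine_usage(cds))   # Counter({'CTG': 3, 'TTA': 1})
```

The paper’s observation, roughly, is that the codon a genome prefers genome-wide also tends to be the one preferentially occupied by transcription factors – an interesting correlation, but a long way from a second Rosetta Stone.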
Welcome to the new year! It’s just like the old year.
*Complicating this explanation somewhat is the fact that this study appears to actually be part of the ENCODE project. Make of that what you will.
**There seems to be some confusion about whether they made it up or were using somebody else’s word. Google Scholar doesn’t show me any use of the term in this context prior to the December paper, so it seems to be all their doing.