Mail


Sleeping on it

Central to the hypothesis of Chiara Cirelli and Giulio Tononi (specifically, that we need sleep to prune synapses)1 seems to be the idea that all synapses grow during the day, whether they're stimulated by specific activities or not. I didn't see anything in the article that specifically supports the idea that all synapses increase in strength during the day; has any research been done on it?

Regardless, it seems that some kind of combination of Cirelli and Tononi's theory and the predominant theory that we need sleep to replay and consolidate memories may make sense. Specifically, sleep may be needed to prune synapses, and a replay of memories may be needed to prevent new connections from being pruned at that time.

Without replay, new synapses may be pruned, preventing retention. Without synapse pruning, replay wouldn't be needed to maintain new connections.

If this is the case, then Cirelli...

Michael Thomas
Lighthouse Worldwide Solutions
Fremont, Calif.
skyhighsmile@yahoo.com

References

1. K.R. Chi, "Disappearing before dawn," The Scientist, 23(4):34–40, April 2009.

Why we don't share

In his column, Steven Wiley is right: it is very hard to come up with common parameters and conditions that allow data from experiments to be easily compared.1 But that doesn't mean it won't happen. At present we are still locked into the journal publication as the medium for communicating our experiments and hypotheses to other scientists. But even journal papers have changed dramatically, with most now carrying supplementary datasets or figures that are often as important as, or more important than, what is in the paper itself.

At some point there is no real value in publishing a journal paper at all—instead the experimental data are uploaded to the lab web page and the conclusions and hypotheses become part of the scientist's blog. If the data are significant, the site will rise in the search engines and be picked up by science news collators like The Scientist. There are already collaborations working just this way.

At that point you have data sharing. If other researchers want to build on those results or test those hypotheses, they will have to use the conditions of the original dataset, and rather than a "standard" being imposed, it will become the standard for as long as it is appropriate. That is the way Internet standards work, and I see no reason for science to be different.

The really big question will be this: How will those who employ scientists measure the value of a scientist's work when, instead of publishing papers, he or she hosts a web page that gets thousands of hits?

Bart Janssen
Auckland, NEW ZEALAND
claire.seymour@gmail.com

In my experience researchers simply don't want to share data. For example, recently two different groups published results of concordance analyses. In each instance, the raw data consisted of 3 x 3 or 5 x 5 tables of integers that the authors had to create to calculate the published kappa values. I sent emails to the corresponding authors asking for the raw data table—at least two months later I have not received a reply to my original email.

If researchers will not respond to a request that seems to me to require little effort on their part, I doubt that they would agree to deposit data in a database.

Jerry Gardner
Science for Organizations, Inc.
Mill Valley, Calif.
gardnerj@verizon.net

References

1. S. Wiley, "Why don't we share data?" The Scientist, 23(4):33, April 2009.

Hashing out a hypothesis

The cancer stem cell hypothesis1 was and remains so widely accepted because it appears to make so much sense. Personally, I was very excited by the theory when it first broke, as it provides such a tremendous starting point for tackling cancer at its roots and also helps to explain remission and relapse of disease. It has come as a great disappointment that the theory appears flawed for many different cancer types. What a shame that great hypotheses do not always hold true!

Brent Neumann
University of Queensland
Brisbane, AUSTRALIA
b.neumann@uq.edu.au

The cancer stem cell theory may hold for certain neoplasms, possibly those derived from tissues for which a stem cell is known and recognizable. This is the case for hematopoietic stem cells; I can imagine one of them mutating towards a leukemic differentiation while retaining other stem cell properties. On the other hand, do we know for sure that stem cells even exist in the mammary gland or among the melanocytes? Most probably these (if ever properly defined and detected) could, in theory, become the breast cancer or melanoma stem cells, respectively. And, if such cells were never found, the "old" carcinogenesis theory would hold for these specific tumors.

Jacek Witkowski
Medical University of Gdansk
Gdansk, POLAND
jawit@amg.gda.pl

References

1. E. Dolgin, "Cancer's culprit," The Scientist, 23(4):59–60, April 2009.

Apply outside the box

Unfortunately, the challenges described in the article,1 which details nongovernmental funding sources, are typical for younger researchers. Smaller, often private, grants could serve as a crucial stepping stone to help junior staff build experience, seniority, and a robust research track record. Let's encourage younger researchers to "apply outside the box," and provide them with the right support to do so.

Junior faculty should not have to face the complexities of grant hunting alone, and the valuable guidance of research administrators and senior faculty should not be underestimated. Involving postdocs in writing grant proposals rather than research papers, or actively involving junior faculty in both proposal development and other important elements of the funding process, are just two examples of how relevant experience and insights can be fostered.

Moreover, when pursuing funding opportunities themselves, junior faculty should seek the advice of senior faculty before spending considerable amounts of time preparing their proposals. Another proven method is to study historical data: what was funded in the past often proves to be a good indicator of future outcomes.

Josine Stallinga
Elsevier Science & Technology
Amsterdam, THE NETHERLANDS
j.stallinga@elsevier.com

References

1. C. Milano, "Finding new money," The Scientist, 23(4):70–72, April 2009.

Pharma gets flak

Re: "Merck published fake journal,"1 with the volume and complexity of information available through electronic media these days, patients, doctors, and researchers more than ever need comprehensive, unbiased sources of information. I have great respect for the pharmaceutical and publishing industries in general, but this is a disgraceful display of deliberate deception by both parties. We desperately need unbiased reporting on new medicines, and new thinking on disease. Does it really have to come to this?

Sheryl Torr-Brown
Mystic, Conn.
srtorr@gmail.com

We do government-aided, i.e., publicly funded, research and remain accountable to a peer review system in getting grants, executing them, publishing the work, and following it through. The life of an author is accountable at every stage.

The publishing world is commercial, aimed at garnering more power for its businesses, in collaboration with an opaque editorial process that is beyond question, aided by a peer review system that cannot be challenged.

It is time we recognize the publishing world for what it is... we can discern parochial, self-serving operations at every stage, and yet we are reluctant to do anything about bringing in more transparency. What Merck and Elsevier have done is admittedly at the far end of the spectrum. But, indeed, the entire publishing world shares the sin to a varying degree, simply due to a lack of transparency and accountability.

Vetury Sitaramam
University of Pune (retired)
Pune, INDIA
sitaramamv@gmail.com

References

1. B. Grant, "Merck published fake journal," The Scientist NewsBlog, April 30, 2009. http://www.the-scientist.com/blog/display/55671/

Erratum:

The original version of "Recombinant DNA Fermenter, circa 1977" misrepresented the expertise of Herbert Boyer and Stanley Norman Cohen: Cohen had more expertise with bacterial plasmids, while Boyer was more familiar with restriction enzymes. The Scientist apologizes for the error.

