Preparing vaccines and therapeutics that target a future mutant strain of H5N1 influenza virus sounds like science fiction, but it may be possible, according to a team of scientists at the National Institute of Allergy and Infectious Diseases (NIAID), a component of the National Institutes of Health (NIH), and a collaborator at Emory University School of Medicine. Success hinges on anticipating and predicting the crucial mutations that would help the virus spread easily from person to person.
Led by Gary Nabel, M.D., Ph.D., director of the NIAID's Dale and Betty Bumpers Vaccine Research Center (VRC), the team is reporting in the August 10, 2007 issue of the journal "Science" that they have developed a strategy to generate vaccines and therapeutic antibodies that could target predicted H5N1 mutants before these viruses evolve naturally. This advance was made possible by creating mutations in the region of the H5N1 hemagglutinin (HA) protein that directs the virus to bird or human cells and eliciting antibodies to it.
Articles like these lead me to believe that Super AI and Non-Biological Human Intelligence will excel at predictive abilities. As they gain knowledge, they'll form increasingly accurate models of the future. If we extrapolate this out to its final conclusion, the question becomes: what will be their final prediction? Will it be that a stalemate with a future universe is inevitable, or will it be a checkmate, with the score Universe one, intelligence zero? What would SAI and NBHI do then, reboot perhaps?
Right now, some of us have already concluded that biological humans will be, if not extinct, then relegated to being pets of Super AI. If we can reasonably predict this, then what reason would SAI have not to predict, likewise, its own end? If it predicts a particular technological advancement, or better yet, many advancements, and then makes further predictions from those, and so on through hundreds of thousands of iterations, what would it see? And if it can keep making highly accurate predictions, what incentive would it have to actually carry out physical experiments? Why not just continue until the accuracy drops below an ever-improving threshold? What if SAI gets so good at running predictive simulations that it decides it doesn't need to do anything else? It decides instead to just lie quietly, diligently, expertly, and feverishly computing, using its results as feedback for further simulations, thereby reaching ever silently into the future...until?