Friday, August 7, 2009

Practice makes perfect, but only after 10,000 hours of practice?

The journalist and writer Malcolm Gladwell insists on this point in his books, his lectures, and his newspaper pieces. Psychological research apparently indicates that achieving true mastery in any kind of activity requires a person or group to put in 10,000 hours of focused work. This holds for researchers (such as Wiles, who proved Fermat's Last Theorem after seven years of effort), for musicians (Mozart, who despite being a child prodigy did not compose a true masterpiece until age 22, or the Beatles, whose stints in Hamburg forced them to play eight hours a day for several years), for chess grandmasters, who need long training (the exception being Bobby Fischer, who needed only 9,000 hours to become world champion), and so on.

In his most recent book, "Outliers" (translated into Spanish as "Fueras de serie" rather than "Atípicos"), he dwells on these results but goes further, into why some people succeed and others do not. As he himself puts it: "People don't rise from nothing. We do owe something to parentage and patronage. The people who stand before kings may look like they did it all by themselves. But in fact they are invariably the beneficiaries of hidden advantages and extraordinary opportunities and cultural legacies that allow them to learn and work hard and make sense of the world in ways others cannot. It makes a difference where and when we grew up. The culture we belong to and the legacies passed down by our forebears shape the patterns of our achievement in ways we cannot begin to imagine. It's not enough to ask what successful people are like, in other words. It is only by asking where they are from that we can unravel the logic behind who succeeds and who doesn't."
In this vein, Gladwell stresses the importance of never analyzing a question out of context, since context is often what determines what ends up as history. He therefore examines a whole series of communities and individuals regarded as outliers, to make us see that, stripped of context, things simply don't work. He tells, for example, of a physician who learns from a colleague that the people he treats do not suffer from heart disease. Since heart disease is so common, the physician sets out to discover why an entire town in the United States, whose inhabitants came from one region of Italy, Roseto, seems to die only of old age. He finds that the reason lies neither in the Italian region they came from, nor in the place where they now live (that is, the environment), nor in genetics, diet, or the work they do. It seems, rather, to be tied to their way of life: how they treat one another, their respect for the old, their habit of stopping to chat in the street; in short, the absence of stress from their lives.
He then analyzes how boys entering hockey teams in Canada are grouped by age, so that a child born on January 2 plays on the same team as one born on December 31 of the same year, which favors the bigger, more mature boys, who are then picked for the better squads and coaches. Incidentally, the same thing happens with school entry, where the most favored are those whose birth dates make them the most mature; they become leaders and are then "naturally" singled out for scholarships and better teachers.
Elsewhere in the book, Gladwell tells us the "true" story of figures such as Bill Joy (Unix) and Bill Gates (Microsoft), taking nothing away from them, but laying out the whole series of favorable circumstances these men knew how to seize to get where they are.
And speaking of opportunities, he describes how someone like Langan, one of the most intelligent people on record, could not hold on to a scholarship, and his mathematics teachers never learned of his immense mathematical ability; today he lives as a "hippie," writing books on mathematics and science that nobody reads. The theoretical physicist Robert Oppenheimer, on the other hand, a brilliant student who tried to poison his supervisor Blackett (a Nobel laureate in physics) for putting him to work on experiments instead of letting him do theory, got off with probation and an obligation to see a psychiatrist. Some time later he talked General Leslie Groves into making him head of the Manhattan Project, despite being a theorist, knowing nothing about equipment, never having held an administrative post, and having dubious political connections.
In short, some critics have called the book autobiographical, since he recounts part of his own background, though Gladwell does not consider himself an "outlier" (merely quite successful). The book takes up other interesting problems as well, and it is a very enjoyable read that sets you thinking about situations of your own or of people close to you.


Sunday, May 10, 2009

EINSTEIN AND THE REVOLUTION IN THE EARTH SCIENCES

However broad a researcher's scientific culture,
the fields of Science
are not unlimited

It must have been 1964. Having completed, with no small difficulty, the Curso Selectivo, then the obligatory gateway to the science degrees, I had just landed in Geology at the Complutense. But although the Selectivo was meant to orient vocations, I remember that mine was a kind of nebula, in which rejections had weighed more heavily than any attraction. In short, I was the prototype of the clueless undergraduate.

Still, I had no shortage of enthusiasm. Among the projects I set myself was nothing less than reading every book in the Faculty library. The library occupied a small room with some 30 seats, run with authority by a paternal porter who would even steer us toward the readings best suited to coping with each professor's quirks. But above all, the library was a jungle full of scientific mysteries. Open stacks were then unheard of in Spanish universities, so we had to content ourselves with studying the spines of the books through the glass of the display cases.
I cannot say why I began my titanic task with the Hapgood. Its spine, a faded green, does not strike me now as especially attractive. Its title, on the other hand ("Earth's Shifting Crust," 1958), had pull. I hasten to add that at the time I had not the slightest idea who Alfred Wegener had been, much less that his basic ideas were about to triumph in what would be the high point of the history of Geology. I asked for the book with the curiosity of someone arriving in an unknown country.

I soon discovered that Charles H. Hapgood, an obscure professor at a teachers' college in the American Midwest, had an ace up his sleeve: the book carried a foreword by none other than Albert Einstein. It also reproduced two letters from the German physicist, replies to letters in which the author had asked him for guidance (and, indirectly, support) for his ideas. As an aspiring theoretician, I was dazzled by the name of the sage of Ulm. I thought: if Einstein agrees, this cannot be wrong. It did not occur to me (it could not have occurred to me then) that however broad a researcher's scientific culture may be, the fields of Science are not unlimited; they are parceled off by thorny barriers whose crossing leaves us helpless, stripped of the keys that let us walk surely within our own specialty.

So I read Hapgood's ideas from cover to cover. In short, they held that the ice caps, not being perfectly centered on the axis of rotation, produce a centrifugal force that ends up displacing the entire terrestrial crust (in a single piece) with respect to the planet's interior. The theory's chief virtue was that it explained some of the paleoclimatic anomalies; its most gigantic stumbling block, the almost self-evident inability of the invoked force to account for the magnitude of the proposed effect. Seen from today's perspective, the work belongs to the restless stirrings that precede a paradigm shift: new data begin to grate, until their accumulation leads to a scientific revolution. But for that to happen, some clever people must know how to find a new harmony among the growing discords, and Charles Hapgood lacked too many of the instruments.

What had Einstein's attitude been toward these ideas? Exceptionally favorable, judging by the foreword, where he said the geologist's first letter had thrilled him. He called the theory original and important, and was grateful that it was set out simply. He went on to gloss its essential points, which started from the evident paleoclimatic anomalies and centered on the proposed centrifugal displacements of the crust. This part closed with an emphatic sentence: "I think that this rather astonishing, even fascinating, idea deserves the serious attention of anyone who concerns himself with the theory of the earth's development."

But one reservation remained. In the final paragraph, Einstein added "an observation that occurred to me while I was writing these lines," one that "could be tested": if the entire crust were displaced merely by the asymmetry of the ice caps with respect to the axis of rotation, then the distribution of the crust's rocks would have to be completely symmetric about that axis, since otherwise the rocks themselves would produce centrifugal forces far greater than those of the ice. With this qualification the physicist all but demolished the geologist's theory, for a glance at a world map is enough to see that no such symmetry exists. Einstein's support for the new idea was therefore somewhat self-contradictory, and it now makes me wonder why the German sage did not follow his usual prudent line of conduct ("only rarely do the ideas I receive have scientific value"). It is risky to walk on unfamiliar ground, but probably, in this final stage of his life, Einstein was weary of his long and fruitless search for a theory unifying relativity with quantum ideas, and more prone to side distractions.

Forty-one years on, I reread the Hapgood (which has since been retired to the old-stock depository of my Faculty's modern library) with evident nostalgia (my project of total reading never got past this one book) and with a somewhat sad sympathy for the author's fate. Had he waited just five or six years, he would have had access to the data that led to the new Global Geology.

He could then have explained, from a physical process that is likewise relatively simple (thermal convection in the Earth's interior), not only the paleoclimatic anomalies but the geology of the planet as a whole: from the history of the continents and oceans to the evolution of life, by way of the distribution of natural resources. Something not as spectacular as the theories of relativity, but every bit as revolutionary.

Although this explosion of science was already brewing in my student days, I did not get to know it until years after finishing my degree: my professors did not frequent the library, and they clung to the dogma of an immobile Earth well beyond what was reasonable. I have never forgiven them for cheating me out of the most exciting moment in the history of the Earth sciences, when in the geology faculties of every advanced country these new ideas were being argued in heated debates that reached the assembly-hall pitch of a political revolution. What would Albert Einstein have thought, had he lived through that period? I have no doubt he would have been thrilled by the New Geology. Because, as he once said, if he managed to make his discoveries it was because he dared to defy an axiom.


By: Francisco Anguita
Professor of Paleontology, Faculty of Geological Sciences,
Universidad Complutense de Madrid
"14 miradas sobre ALBERT EINSTEIN"


Thursday, April 2, 2009

Taller Universitario de Investigación y Desarrollo Espacial (TUIDE)

Recently I was leafing through a book that Ramón Zúñiga's son had bought, "The World's Worsts" by Les Krantz and Sue Sveum (Harper Collins, 2005), and an entry in the chapter "Bad Ideas: The Most Absurd Brainstorms of the 20th Century" jumped out at me, under the heading "The Uganda Space Program." I frankly hope that this business of the Mexican Space Agency does not sink into pretensions of that kind, but instead takes a course better matched to our real capabilities.

In the same vein, I remember a film I saw some time ago about the Apollo mission that put a crewed spacecraft on the Moon. It opens with a phone call from the president of the United States to the mayor, I believe, of a small town in Australia where an astronomical radio telescope operated; all the townspeople knew about it was that it existed and that a handful of half-crazed scientists (three, I think) worked there. The reason for the call was a request for help in relaying the Apollo Moon landing to television audiences on Earth. The Australian town, of course, instantly became front-page news, and I like to think that this is how the Australian space agency got its start. I believe that in Mexico we should be thinking of an agency headed in this latter direction, that is, one that puts more emphasis on building infrastructure to support ventures already operating worldwide, while training the human and technical resources needed for future challenges, than on creating things from scratch without prior knowledge or training.
For that reason, and in the hope of influencing those charged with carrying out this mission, I pass along the call for participation in the Taller Universitario de Investigación y Desarrollo Espacial (University Workshop on Space Research and Development) organized by UNAM.

First
Taller Universitario de Investigación y Desarrollo Espacial
TUIDE


UNAM IN SPACE
PRESENT, PAST, AND FUTURE


Space research and space technology have taken on unprecedented importance in the contemporary world. In this context, and with the aim of mounting a national effort on space matters, the Senate of the Republic recently approved the law creating the Agencia Espacial Mexicana (Mexican Space Agency); the law calls for opening a discussion leading to a National Space Plan. Attentive to this situation, and in order to contribute projects and proposals that help enrich the National Plan, the Universidad Nacional Autónoma de México, through its Office of the Secretary General and its Office of the Coordinator of Scientific Research,

INVITES
the university community to participate, with concrete proposals, in the first Taller Universitario de Investigación y Desarrollo Espacial (TUIDE), on the following topics:

Communications
Control
Materials Development in Space
Biological Experiments in Space
Aerospace Industry
Space Instrumentation
Rocket Launchers and Propulsion
Microelectronics
Aerospace Remote Sensing
Satellite Platforms
Robotics Applied to Space
Telemedicine
Other Relevant Topics

Proposals should be submitted as abstracts, sent electronically through the following address:

http://www.astroscu.unam.mx/congresos/TUIDE/

The deadline for submitting abstracts is May 4, 2009.

Abstracts should focus on projects already under way or on proposals for projects achievable in the future, with emphasis on niches of opportunity.

The Workshop will be held on May 19 and 20, 2009, on the ground floor of the Torre de Ingeniería, Instituto de Ingeniería, Ciudad Universitaria, Mexico City.

The website above contains all the relevant information about the event. For further details, call 56224113 or 56161480.

Sincerely,

The Organizing Committee

Blanca Mendoza (Instituto de Geofísica)
Miguel Ángel Alatorre (Instituto de Ciencias del Mar y Limnología)
Alejandro Farah (Instituto de Astronomía)
Armando Peralta (Instituto de Geografía)
Saúl Santillán (Facultad de Ingeniería)
Gonzalo Guerrero (Facultad de Ingeniería)
José Franco (Instituto de Astronomía)
José Valdés (Instituto de Geofísica)


Tuesday, March 10, 2009

Geoengineering: the solution?

In August 2008 Physics Today published an article on geoengineering (see below). The article drew two interesting letters, which revolve around possible unwanted side effects and the ethical question raised by a potential "carelessness" about CO2 production once a future remedy is thought to be at hand. The letters appeared in Physics Today in February 2009 and are appended below the article.


Will desperate climates call for desperate geoengineering measures?

Earth scientists ponder the wisdom of large-scale efforts to counter global warming.

August 2008, page 26

The public is finally paying attention to anthropogenic climate change, but it has not yet kicked its carbon habit. Global emissions of carbon dioxide continue to rise, with output in recent years exceeding the worst-case projections of just a few years ago. At the same time, Earth is showing signs of accelerated warming, such as Arctic sea-ice melting and a shrinking Greenland ice sheet.

Concerned that Earth's climate will change to an unacceptable degree or at an unacceptable rate before economies can shift significantly away from carbon-based energy sources, some scientists have begun casting their eyes in a previously shunned direction: geoengineering, or intentional and large-scale intervention to prevent or slow changes in the climate system.

Geoengineering sometimes refers strictly to techniques for increasing Earth's albedo, or reflectivity, to lower its temperature and compensate for greenhouse warming. More broadly, the term can include efforts to accelerate some of the natural processes for removal of CO2 from the atmosphere. Many such ideas have been around for decades. In the past few years, however, the debate over their potential deployment has intensified.[1]

Extreme measures?

The idea of deliberately tampering with Earth's climate system raises the specter of unintended consequences, especially because the interventions would be imposed on a climate system already significantly perturbed by the unintentional consequences of human activity. Many scientists are averse to opening that Pandora's box, preferring to mitigate climate through emissions reductions and worrying that those reductions might be undermined by a premature faith in a technical fix.

Nevertheless, some scientists argue that the world needs an emergency backup plan. They advocate doing careful and thorough research to examine the efficacy, costs, side effects, duration, and reversibility of any potential climate-change intervention.

One such advocate is National Academy of Sciences president Ralph Cicerone.[2] He favors separating research on geoengineering from actual implementation. Scientists should not proceed with an intervention, he asserts, until the proposed action is subjected to expert, international peer review, with opportunity for significant public involvement. He sees the need for a qualified agency to oversee the design, implementation, and monitoring of experiments, and he points to the actions of biologists in the 1970s who created ethical guidelines for genetic research.

David Keith of the University of Calgary points out that if serious scientists don't do the work, the field will be dominated by enthusiasts at the fringe, and that geoengineering schemes may be commercialized in a way that ends up being counterproductive.

Other scientists argue that doing any research on geoengineering schemes is dangerous. As Raymond Pierrehumbert of the University of Chicago puts it, "It is very unfortunate that the genie has been let out of the bottle just as the world has begun to awaken to the seriousness of climate change and the need to take real action." There's a real risk, he says, that unwarranted faith in the technology will "cut off at the knees actions that might start to make serious reductions in greenhouse emissions."

Even with the best science, no one can fully anticipate all the unintended consequences of a geoengineering measure. Can one conduct an experiment large enough to give a realistic assessment of a countermeasure without making the experiment so large that it becomes a significant intervention?

Another set of questions concerns geopolitics. If the international community did convene a body to govern the implementation of a certain geoengineering scheme, how would that body decide when to give the go-ahead? The decision would be complicated by the unequal global distribution of climate impacts, such as drought or increased monsoons, and by the unequal distribution of the possible consequences, good and bad. It's possible that a single nation, suffering worse effects than others, might attempt climate modification on its own.

Figure 1. Courtesy of Kurt House, Harvard University.
The proposed types of climate intervention vary widely, with the risks and benefits being quite specific to a particular action. During a public panel held in May at the University of California, Santa Barbara, Kurt House of Harvard University rated the commonly discussed actions by cost and risk, as sketched in the figure on page 26. Reducing CO2 emissions has the lowest risk but the widest range of costs.

Albedo modifications

At the high end of the risk scale and low end of the cost scale fall techniques to reflect solar radiation with small particles in the stratosphere. Volcanic eruptions have given us some evidence for the impact of particles in the stratosphere. Eruptions large enough to loft soot into the stratosphere lowered global temperatures for a year or two afterward. Mount Pinatubo in 1991 hurled roughly 17 megatons of sulfur dioxide to heights of about 18 km. In the stratosphere, SO2 gas forms highly reflective sulfate particles with residence times of a few years.

The cooling impact of volcanoes suggests that the artificial injection of aerosols might lower Earth's temperature. Paul Crutzen of the Max Planck Institute for Chemistry sparked the recent geoengineering debate in 2006 when he proposed[3] the injection of about two megatons of SO2 particles into the atmosphere annually to compensate for a doubling of CO2. The paper attracted a lot of public attention because of its author: Crutzen won the 1995 Nobel Prize in Chemistry for his work on ozone chemistry in the stratosphere.

Alan Robock of Rutgers University is among those who have pointed to a number of problems with the sulfate idea.[4] For one, increased albedo compensates only for higher temperatures; it does not mitigate the growing acidification of Earth's oceans or other consequences of greenhouse gases. For another, albedo enhancement is expected to reduce overall precipitation, especially in the tropics. Regional impacts such as reduced rainfall, soil moisture, and river flow were found in the wake of the Pinatubo eruption,[5] and similar results have been seen in simulations of hypothetical geoengineering schemes.[6]

Mount Pinatubo also contributed to an increased rate of ozone depletion at the poles by providing particulate surfaces on which the ozone-depleting reactions with chlorofluorocarbons (CFCs) can occur. Simone Tilmes at the National Center for Atmospheric Research and colleagues recently simulated the effect of intentional injection of sulfates. They found that geoengineering would prolong—or possibly worsen—the ozone depletion for decades, even with the reductions in CFCs mandated by the 1987 Montreal Protocol.

The ozone hole was an unforeseen consequence of CFC emissions, notes Meinrat Andreae of the Max Planck Institute for Chemistry. He asks, "What makes us think we won't be in for another surprise if we inject so much sulfur into the atmosphere?"

Some view the stratospheric injection of sulfur as a stopgap measure, to shield Earth from higher temperatures while or until global greenhouse emissions can be reduced. If no such progress is made, the greenhouse gases will continue to build in the atmosphere, but the sun shield would mask the temperature rise. If the world for some reason stopped lofting sulfur into the stratosphere, Earth would be abruptly exposed to the far hotter temperatures expected in a world with far higher greenhouse gases.

Another way to reduce the solar heat flux reaching Earth's surface is to put deflectors in space. University of Arizona astronomer Roger Angel envisions a cloud of trillions of 0.6-meter-diameter, thin refractive screens to deflect sunlight.[7] To offset a doubling of CO2 over preindustrial levels would require a total mass of 20 million tons to block 1.8% of the solar radiation. Angel suggests a combination of electromagnetic acceleration and ion propulsion to loft a mass that large into space, at a projected cost of a few trillion dollars.
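As a rough back-of-envelope check on those figures (my own sketch, not part of the original article: it assumes simple geometric blocking of 1.8% of the sunlight crossing Earth's disk and ignores penumbral and orbital details), the implied screen count and per-screen mass come out consistent with "trillions" of thin, gram-scale flyers:

```python
# Rough consistency check of Angel's sunshade numbers (illustrative only;
# assumes simple geometric blocking over Earth's cross-section).
import math

R_EARTH = 6.371e6          # Earth's radius, m
BLOCK_FRACTION = 0.018     # fraction of solar radiation to be blocked
TOTAL_MASS = 20e6 * 1e3    # 20 million tons, in kg
SCREEN_DIAMETER = 0.6      # m

earth_disk = math.pi * R_EARTH**2          # Earth's cross-section, m^2
shade_area = BLOCK_FRACTION * earth_disk   # total screen area needed, m^2
screen_area = math.pi * (SCREEN_DIAMETER / 2)**2

n_screens = shade_area / screen_area
mass_per_screen = TOTAL_MASS / n_screens

print(f"screens needed:  {n_screens:.1e}")                # ~8e12, i.e. trillions
print(f"mass per screen: {mass_per_screen * 1e3:.1f} g")  # a few grams, i.e. very thin
```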

Removing CO2

Other geoengineering ideas concern ways to accelerate removal of CO2 from the atmosphere. The least contentious idea is to capture CO2 as it is emitted from the stack of a coal-fired power plant and to sequester it in reservoirs. (For more on the storage of CO2 see the article by Don DePaolo and Lynn Orr on page 46 of this issue.) A related idea is to pull the CO2 out of the ambient air, but that research is still in its infancy. Growing trees and other biomass also takes CO2 out of the atmosphere, and carbon credits are now given for reforestation projects.

A technique considered to pose higher risk than reforestation is ocean iron fertilization (OIF). The iron helps stimulate greater growth of phytoplankton in nutrient-rich regions of the ocean. The photosynthesis in these microorganisms takes CO2 from the ocean's surface and releases oxygen. When the microorganisms die or are eaten, about 5–30% of their biomass sinks to the ocean's depths. Anthony Michaels of Proteus Environmental Technologies LLC in Los Angeles refers to OIF as "time shifting" since the carbon eventually gets reintroduced to the atmosphere after decades or centuries. Because of its relatively low price tag, OIF has become the focus of several companies interested in carbon removal technologies, presumably with the intent to sell carbon credits or offsets.

Figure 2. Image provided by the NASA Goddard Earth Sciences Data and Information Services Center.
In 12 field studies conducted since 1993, OIF was found to stimulate increased phytoplankton growth (see the figure at left), but those studies were not conducted at sufficiently large spatial scales or adequately long time scales to address OIF's efficacy in storing carbon. How much additional atmospheric CO2 would be removed, and how long would it be sequestered? Might the blooms include harmful algae, or could biochemical processes produce methane or nitrous oxide—more potent greenhouse gases? Michaels is among a group of 16 scientists who have publicly asserted that it would be premature to sell carbon offsets for OIF unless the method is shown to remove CO2, retain it for a quantifiable period of time, and have acceptable and predictable environmental impacts.[8]

Accelerated weathering

Raindrops contain some CO2, so that the drops constitute a weak carbonic acid. Over geologically long time periods, weathering by rainfall dissolves rocks such as magnesium silicate, with magnesium and bicarbonate ions washing eventually into the oceans. The bicarbonate ions can combine with calcium to form calcium carbonate. When the calcium carbonate sinks to the deep ocean, it is sequestered for more than 1000 years, and the carbon is eventually recycled through Earth's mantle.
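In symbols, the weathering pathway just described runs roughly as follows (standard textbook geochemistry, written here for the magnesium silicate forsterite; the equations are added for clarity and are not from the article):

\[
\mathrm{CO_2 + H_2O \;\rightleftharpoons\; H_2CO_3}
\]
\[
\mathrm{Mg_2SiO_4 + 4\,CO_2 + 4\,H_2O \;\longrightarrow\; 2\,Mg^{2+} + 4\,HCO_3^{-} + H_4SiO_4}
\]
\[
\mathrm{Ca^{2+} + 2\,HCO_3^{-} \;\longrightarrow\; CaCO_3 + CO_2 + H_2O}
\]

The net effect is that CO2 dissolved in rainwater becomes dissolved bicarbonate, part of which ends up locked in carbonate that can sink to the deep ocean.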

The oceans absorb about one-quarter of the CO2 added annually to the atmosphere. The absorption is dependent, among other factors, on the alkalinity of the oceans' surface layer. As atmospheric CO2 has increased, so has the acidity of Earth's oceans.

One of several ideas to accelerate the natural weathering cycle is to increase the oceans' alkalinity. For example, House and others have proposed a scheme to remove hydrochloric acid electrochemically from the oceans and to neutralize it by reacting it with silicate rocks.[9] To offset the output of 1 gigaton of carbon per year—one-seventh of today's annual global emissions—House estimates that his weathering scheme would require a seawater flow rate equal to that of 100 large sewage treatment plants and the consumption of about 10 gigatons of basalt rock.

The enormous scale of such geoengineering schemes helps to underscore the importance of getting things right.

Barbara Goss Levi

References

  1. For a review, see D. Keith, Annu. Rev. Energy Environ. 25, 245 (2000).
  2. R. Cicerone, Climatic Change 77, 221 (2006).
  3. P. Crutzen, Climatic Change 77, 211 (2006).
  4. A. Robock, Bull. Atomic Sci. 64, 14 (2008).
  5. K. E. Trenberth, A. Dai, Geophys. Res. Lett. 34, L15702 (2007).
  6. S. Tilmes, R. Muller, R. Salawitch, Science Express, 24 April 2008.
  7. R. Angel, Proc. Natl. Acad. Sci. USA 103, 17184 (2006).
  8. K. O. Buesseler et al., Science 319, 162 (2008).
  9. H. S. Kheshgi, Energy 20, 915 (1995); K. Z. House, C. H. House, D. P. Schrag, M. J. Aziz, Environ. Sci. Technol. 41, 8464 (2007).



Geoengineering: What, how, and for whom?

February 2009, page 10

I have been thinking about geoengineering for climate modification since I worked on the committee that produced the 1992 National Academies report, Policy Implications of Greenhouse Warming. Over the years, and increasingly now, I have been puzzled by the scientific community’s attitudes toward the issue; those puzzles were raised again by Barbara Goss Levi’s story (PHYSICS TODAY, August 2008, page 26).

It has been customary to discuss geoengineering without offering an explicit definition. I propose the following one: Geoengineering is purposeful action intended to manipulate the environment on a very large—especially global—scale. Geoengineering is, presumably, undertaken to reverse or reduce impacts of human actions.

Decreasing human emissions of carbon dioxide and other greenhouse gases is a good idea for many reasons, including climate modification; yet it is not clear to me why manipulating the CO2 content of the atmosphere is not considered geoengineering. If, using the above definition but narrowing it to the case under consideration, geoengineering includes purposeful manipulation of physical, chemical, and biological variables on the global scale for the purpose of changing global climate, then manipulation of carbon dioxide concentrations fits the definition as well as do, for example, manipulating atmospheric aerosol content to control albedo or manipulating the ocean’s iron content to increase the long-term oceanic storage of carbon.

Exclusion of carbon dioxide manipulation from geoengineering has led to a double standard in considering possible negative consequences. There is legitimate concern about side effects of particulate manipulation and the like, but I have not heard much worrying about manipulating carbon dioxide.

In a highly nonlinear feedback-controlled system like global climate, we would expect complex hysteresis effects: Decreasing a control variable such as greenhouse gas will not necessarily lead the climate back along some path like the one it followed when the control variable was increased. The end state of control-variable manipulation may not at all resemble the original state before the control variable was increased, nor will it necessarily be a state we want to be in. I have heard of no concern about those possibilities, which might be rate dependent, involve transient behavior not to our liking, or lead us through bifurcations into unexpected states.
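That worry can be made concrete with a toy calculation. The sketch below (my illustration, not Frosch's; it assumes a zero-dimensional energy-balance model with a crude ice-albedo feedback and parameter values chosen only to produce two stable states) shows that sweeping a greenhouse-like forcing up and then back down does not retrace the same path:

```python
# Toy zero-dimensional energy-balance model with an ice-albedo feedback.
# Demonstrates hysteresis: ramping the forcing F up and back down leaves
# the climate in a different state from the one it started in.
import math

S = 1361.0        # solar constant, W/m^2
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
EPS = 0.61        # effective emissivity (illustrative, tuned value)

def albedo(T):
    """Smooth step from an icy planet (0.6) to an ice-free one (0.3)."""
    return 0.6 - 0.15 * (1.0 + math.tanh((T - 275.0) / 10.0))

def relax(F, T):
    """Integrate dT/dt ~ S(1 - a)/4 - eps*sigma*T^4 + F to equilibrium."""
    for _ in range(20000):
        T += 0.01 * (S * (1.0 - albedo(T)) / 4.0 - EPS * SIGMA * T**4 + F)
    return T

T = 230.0                        # start on the cold branch
ramp = list(range(0, 41, 5))     # forcing in W/m^2, swept up and then down
for F in ramp + ramp[::-1]:
    T = relax(F, T)              # each equilibrium remembers the previous state
    print(f"F = {F:2d} W/m^2  ->  T = {T:5.1f} K")
```

On the upward ramp the model sits on a cold branch until the albedo feedback tips it into a warm state near F = 25 W/m^2; on the way back down it stays warm all the way to F = 0, ending roughly 35 K above its starting point. That path dependence is exactly the kind of behavior the letter warns about.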

It seems to me we need to be concerned about possible not-so-benign effects of decreasing carbon dioxide and other greenhouse gases before we grab the control knob and turn it down. I hope someone is at least exercising suitable climate models, if relevant ones exist, to examine possible end states, paths to them, and transient effects of carbon dioxide geoengineering.

Robert A. Frosch
Harvard Kennedy School
Cambridge, Massachusetts

The Issues and Events report on the viability of geoengineering to counter global warming did not address the ethical issue. I use the following fable to illustrate the point.

Once upon a time in an idyllic country, near a small town and a farming community, a rope hung out of the sky. One pull on the rope changed the weather from fine and sunny to cloudy and rainy, and the next pull changed it back. For many years the people cooperated; the farmers used the rains to help grow crops, and the townspeople enjoyed the sunny periods. But there came a time when the townspeople protested the rain and wanted more sunshine. The farmers were concerned about their crops. And so arguments broke out, with a person from the town pulling on the rope, followed quickly by a farmer pulling it again, and they pulled and pulled and . . . broke the rope.

Given that the climate is changing because of inadvertent consequences of human activities, the question arises as to whether efforts should be made to deliberately change climate to counteract the warming. Aside from the wisdom and ability to do such a thing economically, the more basic question is the ethical one of who controls the rope. Who makes the decision on behalf of all humanity and other residents of planet Earth to change the climate deliberately?

Climate change is not necessarily bad. The climate has always varied to some degree, and changes have occurred over decades and millennia. Humans and other creatures have adapted to the changes or perished; it is a part of evolution. Changes projected with increased greenhouse gases in the atmosphere may have some aspects that could be regarded as bad; increased heat waves and wildfires in summer, increased and more intense droughts, heavier rains and risk of flooding, stronger storms, decreases in air quality, and increases in bugs and disease are all likely threats. But in some areas, climates improve, high-latitude continents become more equable, growing seasons are longer, and so on. There are winners and losers. And it is possible to adapt to such changes—at least if the changes occur slowly enough. In other words, key issues are the rate and duration of change, perhaps more so than the nature of the new climate. In that sense, it is the disruptive part of climate change that might be argued as being bad.

Given that climate change is not universally condemned, how can anyone justify deliberately acting to change the climate to benefit any particular group, perhaps even a majority? The ethical questions associated with climate manipulation loom so large that some forms of geoengineering are simply unacceptable. The forms that are acceptable include those that reduce emissions and mitigate the rates of change or reduce the amount of carbon dioxide in the atmosphere. Forms that propose to block sunlight in some fashion, perhaps to emulate a volcanic eruption, would change the hydrological cycle and weather patterns in ways that would be simply unacceptable, even if they were doable. The cost and viability of any such proposals are other major issues, but in my view, they are overwhelmed by the ethical considerations.

Kevin E. Trenberth
National Center for Atmospheric Research
Boulder, Colorado


Friday, February 13, 2009

What are prime numbers for?

Scientists at the University of Cambridge and the Consejo Superior de Investigaciones Científicas (CSIC, Madrid) have taken steps toward proving the Riemann Hypothesis, one of the millennium mathematical problems, tied to the distribution of the prime numbers and whose solution carries a one-million-dollar prize.


Hello colleagues, here is an interesting piece of news about the distribution of the prime numbers within the sequence of natural numbers. It is still not known how the primes (those numbers divisible only by 1 and by themselves) are distributed. That is, no formula has been found that tells us which prime comes next, the way we can for the even numbers with the expression 2n, or for the odd numbers with 2n + 1.
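To make the contrast concrete, here is a minimal sketch (added for illustration): the nth even or odd number comes straight from a formula, but the "next prime" can only be found by searching, for example by trial division:

```python
# No known closed formula gives the next prime; we have to search for it.
# Compare: nth even number = 2*n, nth odd number = 2*n + 1 (direct formulas).
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def next_prime(n):
    """Smallest prime strictly greater than n, found by brute-force search."""
    candidate = n + 1
    while not is_prime(candidate):
        candidate += 1
    return candidate

print([next_prime(k) for k in (1, 10, 100, 1000)])   # [2, 11, 101, 1009]
```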

According to a Spanish newspaper, there has been progress in this research: the new advances were published in Physical Review Letters, where the researchers propose a model based on quantum physics, a model that, though still incomplete, "could be the key to proving the hypothesis."

For some decades now, scientists have suspected that the Riemann hypothesis might be proved by way of physics.

The researchers Germán Sierra, of the Instituto de Física Teórica (Universidad Autónoma de Madrid), and Paul Townsend, of the University of Cambridge, propose a model in which an electron is subjected to certain electromagnetic fields.

The model "is still incomplete, but we think it is a good starting point for a possible physical demonstration of the hypothesis, and it may stimulate the work of other researchers," Sierra commented.

Even so, this would not in itself constitute a proof of the hypothesis, which must be carried out in exclusively mathematical terms.

But what is this famous Riemann Hypothesis?

The Riemann hypothesis was formulated in 1859 by the German mathematician Georg Friedrich Bernhard Riemann and, though somewhat involved in its statement, it is directly related to the prime numbers and the pattern of their distribution along the sequence of natural numbers.

Since it was not a central part of his investigation, Riemann himself passed over its proof, and the mathematical community has been attempting one ever since, without success.

In 2000 the Clay Mathematics Institute (United States) included it among the Millennium Problems, offering ONE MILLION DOLLARS to whoever proves it.
-----------------------------------------------------------------------------
The hypothesis itself arises from the so-called Riemann zeta function, defined as the sum of the reciprocals of the positive integers raised to a power usually denoted "s".
-----------------------------------------------------------------------------
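In symbols (the standard definition, added here for clarity):

\[
\zeta(s) \;=\; \sum_{n=1}^{\infty} \frac{1}{n^{s}} \;=\; 1 + \frac{1}{2^{s}} + \frac{1}{3^{s}} + \cdots
\]

The series converges for Re(s) > 1; the function is then extended to the rest of the complex plane by analytic continuation, and it is the zeros of this extended function that the hypothesis concerns.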
When values are fed into this function, the result is sometimes zero; some of these zeros are easy to predict and others are not.

What Riemann intuited (this is the content of his hypothesis) is that the rest all lie on a single straight line in the plane, and he discovered that the position of the zeros determines the position of all the prime numbers.
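As a quick numerical illustration (a minimal sketch using the mpmath Python library, not part of the original news item), the first few nontrivial zeros do sit on the line Re(s) = 1/2:

```python
# First nontrivial zeros of the Riemann zeta function, computed with mpmath.
# Every zero found so far has real part exactly 1/2, as Riemann conjectured.
from mpmath import zetazero, zeta

for n in range(1, 6):
    rho = zetazero(n)              # n-th zero, e.g. 0.5 + 14.1347...j for n = 1
    print(n, rho, abs(zeta(rho)))  # |zeta(rho)| is numerically ~ 0
```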

How would the physical model help?

In the model now proposed, the energy levels of the atom coincide, on average, with the positions of the zeros of the Riemann zeta function, although the model cannot yet pin down their exact positions.

Taken from: http://www.diariodemallorca.es/secciones/noticia.jsp?pRef=2009020700_18_433739__Ciencia-solucion-problemas-matematicos-milenio-esta-cerca


Monday, January 12, 2009

ASTRONOMICAL EVENTS FOR 2009

We will have the privilege of enjoying several astronomical phenomena in 2009, especially the so-called "star showers" (meteor showers), which can even be seen with the naked eye in various months of the year, as well as two penumbral eclipses among the six eclipses that will occur worldwide, astronomer Juan Antonio Juárez Jiménez of the Planetario Luis Enrique Erro of the Instituto Politécnico Nacional (IPN) announced.

On April 23 the meteor shower known as the Lyrids can be seen; on May 7, the Eta Aquariids; on July 30, the Delta Aquariids; and on August 12, the Perseids. These last will be the most important, with an estimated rate of more than 100 meteors per hour.

The best time to watch these phenomena in our country will be from 03:00 to 05:00 in the morning on the dates indicated. Clear observation of any astronomical phenomenon will, however, depend on the weather.

In the case of the Perseids, Juárez noted that they unfortunately fall in the rainy season, which could spoil visibility, something that may discourage the public if the phenomena cannot be seen clearly.

Similar phenomena are also expected in the second half of the year. The specialist pointed out that on October 21 the Orionid shower will appear, with roughly 23 objects per hour.

On November 18 the Leonids are expected, with an intensity between 120 and 500 meteors per hour. "A Leonid meteor storm is anticipated, and it is expected to be one of the most important of the year," he stated. The Geminids can be observed on December 14, at 120 meteors per hour.

As for the names given to meteor showers, he said they are taken from the constellations from which the meteors appear to radiate in the sky.

According to an IPN press release, this past January 3 brought the Quadrantid shower, with more than 120 meteors per hour.

Juárez recalled that a "star shower" occurs when small particles of interplanetary debris (mostly shed by comets) enter Earth's atmosphere, where friction makes them burn up. Hence the lit-up sky, a spectacle well worth watching.

Ref. http://www.jornada.unam.mx/2009/01/12/index.php?section=sociedad&article=035n3soc
--------------------------------------------------------------------------------------
ASTRONOMICAL EVENTS (Centro de Radioastronomía y Astrofísica, UNAM-Morelia)

http://www.crya.unam.mx/~r.franco/eventos.html


Wednesday, January 7, 2009

A good New Year's resolution: how to give a better talk

We have all seen talks of excellent and of very poor quality. Fortunately, we cannot (or do not have to) watch our own talks, so we are left with the (surely unjustified) impression that everything went fine. This article, published in Physics Today, can help us all improve our talks, and I strongly recommend reading it, especially to students and young researchers.


http://ptonline.aip.org/journals/doc/PHTOAD-ft/vol_61/iss_12/49_1.shtml

Who is listening? What do they hear?
Physics Today December 2008

In communicating our science, have we put too much emphasis on the information we want to convey? Perhaps there is another way to think about it.
Stephen G. Benka
December 2008, page 49

This past summer, at a large international scientific meeting where every contributed talk was allowed 20 minutes, I wandered into a session that seemed intriguing but dealt with a topic about which I knew nothing. After a few hours, I had heard several incomprehensible talks, a couple that justified my intrigue, and one from a fellow who spent 15 of his 20 minutes enumerating the things that he would not include in his talk. Some months earlier, I had given a colloquium in a physics department where I had a number of friends. My talk was a flop; I carried on about many things that interested me but not them. The following week, for another colloquium at a different university, I used the same title but gave a completely reworked talk, and it was very well received. All of which raised for me the following question: What really makes a talk good? Ruminations in that vein led to my giving an invited talk this past summer in Edmonton, Canada, at a meeting of the American Association of Physics Teachers. My title was, "It's the Audience, Stupid!" and I was asked by several people to write it up. This article is the result.
Most of us have heard some standard communication tips that are often treated as dogma, such as, "First, tell them what you're going to tell them. Then tell them. Finally, tell them what you've told them." Such advice can be useful, but it won't guarantee a successful talk. It might even encourage some of us to think one-dimensionally: Here I am in front of these people, loaded with information, worrying primarily about how best to get that information "out there" where it will be appreciated. It is all about me and my information. But what about those on the other end? How does the information appear to them? Each member of the audience brings to the room not only a unique background and set of expectations, but also a unique comfort zone of knowledge. Each will see the information through the prism of individual and professional experience. What will attendees really hear? How does one measure "success" for a talk?
One perspective on success that I find helpful was offered in this magazine back in July 1991. James Garland wrote
Whenever you make an oral presentation, you are also presenting yourself. If you ramble incoherently, avoid eye contact, flash illegible transparencies on a screen, and seem nervous and confused, then your colleagues are not only going to be irritated at having their time wasted, they're also going to question your ability to do your job. However, if you present your ideas clearly and persuasively, with self-assurance and skill, you will come across as a reasonable, orderly person who has respect for the audience and a clear, insightful mind.
So how does one actually assemble a compelling, successful talk?

Two interacting systems
The ability to communicate effectively is unevenly distributed among humanity. Never has an infant been born and immediately begun to deliver great oratory. A newborn needs both time and effort to learn to communicate, never mind the much later accomplishment of speech. As they age, however, many people seem to talk more and communicate less. Of course, we scientists take it for granted that everyone hangs on our every word, all the time, whenever we speak. Right? Would that it were so. Unfortunately, we all need to continually learn, relearn, and refine our communication skills. Scientists are no exception. Whether naturally tongue-tied or golden-voiced, each of us can benefit by routine practice and honing of our communication skills.
Sometimes we talk and write about our work, whether we want to or not, because doing so is part of our professional lives. Other times, we seek opportunities to talk or write about something of particular importance to us. My underlying premise is that for all communication, we want somebody else to actually understand what we are trying to convey.
Communication involves two systems—a supplier and a recipient—that interact via the information passing between them. Both systems are essential. Without the supplier of information, be it a speaker or an author, the recipient is frustrated in the search for knowledge. Without the recipient, the supplier is pointless. Yet many speakers and authors never give the audience more than a passing thought. In my opinion, effective communication uses information to move an audience from an initial mixed state of knowledge to a final state of understanding.
As scientists, we are naturally intrigued by new developments, curious about new results, gratified when others accept our own research as important. For many of us, the easiest way to communicate results is via the dry, impersonal, just-the-facts journal article in our particular field. It is a fair assumption that those who read the article are already reasonably well versed, perhaps truly expert in the field being discussed. And so we become comfortable throwing around specialized vocabulary, diving right into the technical details of our work, and never really thinking about our readers. But what of the curious scientist who wants to learn something new, perhaps even change fields, and turns to the article? Without being aware of it, our tendency is often to let the neophytes fend for themselves. That tendency can too often spill over to other venues—talks at scientific meetings, department colloquia, and even casual conversations with our neighbors and friends.
Here, I want to turn upside down the assumption that in communicating science, information is paramount. Instead, let's examine the reverse premise, that determining the actual information to convey is secondary to ensuring that it be understood. Let me say it again: It is far better to be understood by your audience—even if you convey less information than you hoped—than to convey everything you intended and be incomprehensible. I am not suggesting that the information is unimportant or to be treated sloppily: The candid delivery of accurate information is a necessary but not sufficient condition for an effective presentation, whether written or oral.
Although this article is focused on giving talks, most of the main points can be easily adapted to the written word. For every talk and many papers, there are three major considerations: audience, audience, and audience. Identify the audience. Respect the audience. Engage the audience.

Who is your audience?
All audiences are not equal. Even roomfuls of physicists differ. If everyone present is an expert in your topic, then your job is simple. With the briefest of introductions to place your talk in context, you can launch right into a technical discussion, throwing jargon around like pieces of candy, knowing that everyone will enjoy the treat. Groups of experts in any specialized field are typically small with most individuals, including friends, adversaries, collaborators, and competitors, known to each other. In that situation, your best preparation is merely to master your subject.
Of course, not all physicists, let alone all scientists, will be experts in the given subject. When the audience broadens to include people from other specialties, the talk must also broaden to include them. No longer will everyone know all of the specialized vocabulary. No longer will each listener know the nuanced arguments and assumptions that lie behind "well-known" results. And no longer will everyone grasp the importance of the work and how it fits into the larger framework. What if the audience is broader yet, and includes nonscientists? What if you are giving a public talk? Or speaking to a class of schoolchildren? You wouldn't tell an eight-year-old about the Dirichlet conditions required for a Fourier expansion, would you? Sadly, experience suggests that some physicists would.

Vefarps, wotoiks, and two keys
To unlock minds and promote understanding in a mixed audience, two keys are needed. The first is to provide the audience with an appropriate context for the talk. Experts need little context. For example, let's say you've come up with a very clever "vefarp," a vital element for a research project. The research project—of which the vefarp is but one vital element—is actually the world's only thing of its kind, a "wotoik." Your vefarp could be a piece of equipment, a computer program, an equation, a concept, whatever. The point is that it will introduce highly significant improvements to the wotoik. In an advanced seminar, you would present the finished vefarp to your collaborators in all its glorious detail: the current shortcomings of the wotoik, the stumbling blocks to a solution, the sophisticated insight for the vefarp, the nitty-gritty development of that insight into a reality, the moment of truth, and the bright hope for the future. The vefarp excites your colleagues as it excited you because the long-awaited wotoik is now nearly ready to be put to use.
Now let's ask, Could that same presentation be given to a broader scientific audience? Of course it could. But then we must be prepared to see blank faces, fidgeting, and general frustration in a dwindling audience; the listeners won't all have the background to understand the details of the vefarp, and so they won't grasp its importance, perhaps not even extract the larger purpose of the wotoik from the details provided. For a more general audience, we must rethink the talk from the bottom up, based on our understanding of who is actually in the audience. It is crucial to lay the groundwork so that nonexperts can appreciate the significance of what we say.
For the mixed audience, context is everything. There is a real danger of getting trapped into trying to impress the experts and thereby alienating and confusing everyone else. And there is always a chance that someone in the room will some day have a hand in advancing your career. So do your best to give everyone present something to latch on to, some understanding to take away, an appreciation of why you are so excited about the work.
To include more context and promote understanding, you will probably need to jettison some other material, perhaps many of your favorite details. It may help to remember that every talk both succeeds and fails, in various ways, with different members of the audience. In essence, the problem of developing a good talk is one of optimization: choosing the most appropriate information for the given audience and delivering it effectively.
How do you decide which information is appropriate? The answer lies in the second key: to carefully choose your take-home message. Ask yourself, If I were an "average" member of the audience, neither novice nor expert, what would I hope to learn from the talk and what should I come away with? If you do your job well, the audience will automatically learn how brilliant you are both as a scientist and as a speaker, so self-promotion or showing off need not be your goal. The secret is to choose a take-home message that most of the audience can appreciate and that serves your field well. Fit your take-home message into the scientific edifice of the field.

Into the unknown
In a talk, we are free to include information of any kind but making careful, deliberate choices will pay big dividends. Remember that we are taking our listeners into unknown territory. As their guide, we have the responsibility to see that they don't lose their bearings. Start with the audience's common experience, the one thing that unites them in that room on that day. Use that commonality to deduce what they probably already know, and thereby establish the largest context. If half of them never heard of a wotoik, let alone the crucial vefarp, then start by telling them about the project of which the wotoik is an important part. It may be that even the reason for the project is a mystery to many in the audience. In that case, explain the grand quest, pose the questions being pursued by several projects, each in their own way. Only then can your listeners follow you down the path of the specific project that needs the wotoik that the vefarp so brilliantly enables.
Obviously, time is limited. Therefore, to provide the best education for listeners, I try to think of a talk as an information funnel: Starting with a wide enough context to encompass all members of the audience and explaining unfamiliar concepts and vocabulary along the way, I attempt to bring them along on a journey to the take-home message. The shorter the talk, the taller the challenge. There are at least two viable ways to meet that challenge: Eliminate nonessential technical details and broaden the take-home message. Both routes result in more of an overview than an advanced seminar, and by fine-tuning the level of detail and the bottom line, almost any audience can be appropriately addressed, even in a 10-minute talk.
It seems paradoxical that not talking about those details on which you worked so hard can improve your talk. But keep in mind that experts won't object to being told what they already know, while nonexperts loathe being told what they can't understand. Your thorough knowledge of every detail will be inferred if you show an understanding of the subject, and that detailed knowledge can shine brightly during the question-and-answer period. For some audiences, the vefarp might be utterly irrelevant. Then there is no reason even to mention it, despite all the hard work that went into it.
Even while stepping up to the front of the room, I try to have the take-home message in the forefront of my mind. I try to present the opening context with my take-home message in mind. I try to include only those details that have a direct bearing on the take-home message. From start to finish, it's all about, you guessed it, the take-home message. After all, that is why we give talks. So here is some advice: Recognize that your talk is not about you; it is about whatever your audience needs from you. Before preparing and delivering your next talk, write this little cheat sheet on your hand, as I now do, paraphrasing a 1992 political campaign's cheat sheet: It's the audience, stupid!

Respect
Very few of us are professional speakers; I certainly am not. But we are professionals nonetheless, and being a professional means showing respect for the audience. That respect includes more than just giving an appropriate talk, with appropriate context and an appropriate take-home message. As speakers, we have asked the audience to take time out of their busy schedules to listen to what we have to say. They don't have to come and many don't. But those who do attend have a justified expectation of learning something for their trouble.
To ensure that a talk goes smoothly, a speaker must be prepared technologically. Were the slides delivered in advance? Is the equipment in the room familiar or is a quick dry run needed? If necessary, can you switch smoothly from the slides to a video and back? Are any needed audio files you will use readily available; is the sound connected properly, with the volume set to a suitable level? Will you use a microphone; if so, what kind? Will you be able to walk freely? Do you have a pointer?
A speaker must always be punctual. Many of us have been in sessions at which a speaker failed to show up or came in at the last possible moment. Such behavior disrupts the flow of the session, distracts the attention of the audience, dismays the chair, and disrespects everybody present.
You must always—always!—stay within your allotted time. The worst transgression a speaker can commit, the most disrespectful act, is to exceed the time limit. Here is what occurs when a speaker goes overtime: The following speakers are delayed and become annoyed; the session runs long and the audience becomes annoyed; the chair is perceived as incompetent and becomes annoyed; people who session-hop for specific talks are thrown off schedule and become annoyed; and worst of all, the offending speaker is perceived as unprofessional and disrespectful. In such a situation, the speaker sends a strong message that nobody else matters. It is a situation in which everyone loses.

The engagement
Having carefully selected the information that will funnel listeners to the take-home message, that information still needs to be effectively conveyed. To engage an audience, a speaker must first engage him- or herself, recognizing the importance of time management, legible slides, a fluid narrative, and a clear delivery.
A rehearsal is essential. With a timer. Out loud—though I've done it under my breath on airplanes. If you are bashful, practice it by yourself. Far better, practice it in front of family or friends, preferably without telling them in advance what the talk is really about. See if they get it. If you are anything like me, the practice session will reveal some significant flaws—it runs too long, the take-home message is unclear, some piece of logic or storyline is missing or garbled, proper credit was not given to others, and on and on. A practice session is a golden opportunity to identify the problems and solve them. If you haven't set the stage completely, add some more context. If there is extraneous material, get rid of it. If your message isn't clear, sharpen it. If you stumble on a detail, rephrase or eliminate it. Practice pronouncing difficult words. If a slide is cluttered or muddled with poor colors, fix it. If your transition to audio or video is not seamless, streamline it. Then do another dry run. Are you now within your allotted time, proceeding smoothly from audience-specific context, through clear explanations of the details, to the desired conclusion? If not, another iteration is needed.
I vividly recall delivering my first scientific talk, more than a few years ago. I was a nervous wreck, mumbled quickly at the screen or at my shoes, aimed a pointer that had a life of its own, dropped my transparencies. The nightmare finally ended, I fielded a question or two and collapsed into my chair. When asked later if I had practiced, I said yes, but the reality is that my practice was not meaningful; it consisted merely of seeing if my slides were all in one place.
If you are an experienced speaker, a dry run will help ensure that you stay within the time limit. If you are a relatively new speaker, you might not realize how tremendous the benefits of a real rehearsal can be. With each run, your presentation will gain clarity and you will gain confidence. With that confidence, you can concentrate on actually engaging the audience, not just surviving an ordeal. You will be more comfortable making eye contact. Asking questions, even rhetorical ones. Speaking up and speaking clearly. You will more easily discover the joy of being multilingual, using language that is expert-friendly, novice-friendly, or public-friendly. In short, you will learn to recognize your talk for what it is: an experiment designed to bring the audience from a mixed state of knowledge to a final state of understanding with you as the best instrument for the job.

Stephen Benka is the editor-in-chief of PHYSICS TODAY.
