Thursday, November 17, 2005

Development and Utilization of X-Ray Technology

by David Smoot (Class of '05)

X-ray technology is a form of medical imaging used around the world to help diagnose and treat disease. Each day, hundreds of thousands of diagnostic X-rays are performed in the United States. X-rays were discovered by Wilhelm Roentgen in 1895. He produced them with a special cathode ray tube that emitted a previously unknown form of radiation. Once the X-rays were produced, they were projected onto a special type of film, making an image. Roentgen decided to experiment with his X-rays and used a person's hand in one of his experiments. When the film of this X-ray was developed, Roentgen realized that the resulting image was a picture of the bones in the hand. Interestingly enough, when he took the X-ray, the person involved in the experiment was wearing a wedding ring. Roentgen recognized that the X-rays could not penetrate the ring, which is why the film came out white around the ring finger. A new technology was born. All of a sudden people could see what was going on inside the mysterious human body.

X-ray imaging is, in a sense, a specialized form of photography. Just as photography depends on light exposure, an X-ray image depends on exposure and on density: how dark each part of the image comes out is determined by the densities and transparencies the X-ray beam encounters on its way to the film. For example, the lungs appear black and the bones appear white. The lungs appear black because X-rays pass easily through their low-density tissue and expose the film behind them. Bone is extremely dense, so X-rays cannot pass through it, leaving the film unexposed where the bones are. A black and white image is made, and because each organ has a characteristic density, radiologists can determine which organ is which. This is the basis of X-ray technology.

Today there are many more sophisticated methods of producing an X-ray image. Instead of using black and white film, X-rays are now captured digitally and viewed on high-resolution computer screens; fittingly, now that we are in the digital camera era, the medical field has gone digital as well. In addition to digital capture, there are methods that involve three-dimensional imaging. Three-dimensional images can be produced using CAT (computer-assisted tomography) scan techniques. In this technique, an X-ray tube is rotated continuously around a person or object, and a computer collects all of the projection images that are made. This information is then fed into a special mathematical algorithm that produces a three-dimensional image. As most people would predict, a 3-D image provides far more diagnostic information than a 2-D image. All of a sudden, the spatial relationships between internal organs are shown in detail. Regular X-rays only show an image projected onto a single plane and are therefore less diagnostic.
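
To make the reconstruction step less mysterious, here is a toy two-dimensional Python sketch of unfiltered back-projection (it needs NumPy and SciPy); the phantom, the angles, and the array size are invented for illustration. Real scanners use filtered back-projection or iterative methods and stack many such slices into a 3-D volume, so this simplified version produces a blurry but recognizable reconstruction.

import numpy as np
from scipy.ndimage import rotate

size = 64
phantom = np.zeros((size, size))
phantom[24:40, 24:40] = 1.0   # a dense square standing in for bone

angles = range(0, 180, 3)     # one projection every 3 degrees

# forward step: rotating the "patient" is equivalent to rotating the tube,
# and summing along one axis simulates the X-rays passing straight through
sinogram = [rotate(phantom, a, reshape=False, order=1).sum(axis=0) for a in angles]

# reconstruction step: smear each 1-D projection back across the image
# and rotate it into the direction it was taken from
recon = np.zeros_like(phantom)
for a, proj in zip(angles, sinogram):
    recon += rotate(np.tile(proj, (size, 1)), -a, reshape=False, order=1)
recon /= len(angles)
print(recon[32, 32] > recon[5, 5])   # the center comes out brighter than the edges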

X-ray technology is over 100 years old. The use of X-rays in medicine has revolutionized the physician's ability to make an accurate diagnosis.

References:

1600 x 1200? No Problem!

Vector Graphics and their applications

by Paul Reny (Class of '05)

Ever find an image online somewhere and think, “Wow, that would look good as my desktop background!” only to discover that, alas, when the image is stretched to cover your beautiful 1600 x 1200 pixel display, it is distorted and blocky? That problem is easily solved with vector graphics. Distorted images be gone: vector graphics are the way of the future, even if they are still somewhat in their infancy.

Vector graphics, also known as object-oriented graphics, are images that are completely described using mathematical definitions. They are constructed from mathematical formulas describing placement, shapes, and colors. Unlike bitmap images, vector graphics contain shapes, lines, text, and curves, which together form a picture. Each individual line is made up of a few control points that are connected using Bezier curves. A Bezier curve in its most common form is a simple cubic equation that can be used in any number of ways. Originally developed in the 1960s by Pierre Bezier for computer-aided design of car bodies at Renault, it has become the foundation of the entire Adobe PostScript drawing model. Because it uses a minimal number of control points to draw curves, this model is a very space-efficient way of describing images.
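
As a concrete illustration of how little data such a curve needs, here is a minimal Python sketch that evaluates a cubic Bezier curve from its four control points; the specific points and sample count are arbitrary examples, not values from any particular graphics format.

def cubic_bezier(p0, p1, p2, p3, t):
    """Point on a cubic Bezier curve at parameter t, where 0 <= t <= 1."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

# Four control points fully define the curve; scaling them scales the curve
# itself, with no pixels to interpolate and no loss of quality.
control = [(0, 0), (1, 2), (3, 2), (4, 0)]
outline = [cubic_bezier(*control, t / 20) for t in range(21)]
print(outline[:3])

Scaling those four control points by any factor reproduces the same smooth outline at any size, which is exactly why vector images survive enlargement that would leave a bitmap blocky.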

In addition to the simplicity with which they are drawn, altering vector graphics is a very simple process, as the shapes and components within them can easily be ungrouped and edited separately. Because they are built from mathematical formulas, vector-based images are easily scalable without any loss in quality, up to 6400%. This is a vast improvement over bitmap images, where the slightest enlargement starts to distort the image greatly, even given recent improvements in anti-aliasing. Vector-based images are therefore ideal for logos, maps, and other objects that are resized frequently. However, due to the way data is stored in vector graphics, they are not well suited to complex images such as photographs. The lack of well-defined shapes and lines makes it very difficult to create vector-based images from photographic images. This is the one major shortcoming of vector-based graphics. An additional minor problem is that there is no single uniform file type used for all vector graphics, the way GIF or JPEG is used for bitmaps. Fortunately, it is relatively easy to convert vector-based images into bitmaps, although much more difficult to do the opposite conversion.

Since vector-based images only need to store instructions for drawing an image, i.e. the data for the few control points of the Bezier curves, and not data for each individual pixel, a vector graphics file is significantly smaller than a bitmap file. Often, vector data stored in EPS format, which includes a bitmap preview in addition to the Bezier data, is actually smaller than the preview data itself. Other vector graphics formats include PICT and WMF, in addition to PostScript and TrueType fonts. Common programs used to draw vector graphics are Adobe Illustrator, CorelDRAW, and Macromedia FreeHand. While bitmaps will continue to dominate the photo image industry, vector graphics are by far the superior choice for simple, scalable, compact images.

References:

Light Pollution

by Sarah Flannery (Class of '05)

Astronomers and stargazers worldwide face the threat that they will no longer be able to see their beloved night sky because of light pollution. Light pollution is defined as the “glow of light we see at night above cities and towns,” according to the Ontario Hydro leaflet on light pollution. (Ontario Hydro is a progressive electrical utility for Ontario, Canada.) It is caused by poorly designed outdoor lights, such as street and parking lights. The light spills to the side and glows upwards, making the stars less visible. If lights were designed to minimize upward light leakage, light pollution would not be a problem. Light pollution consists of five main components: glare, light trespass, clutter, energy waste, and urban sky glow. Glare blinds us and harms visibility. Light trespass is when somebody’s outdoor lights spill, or “trespass,” onto other people’s property. Clutter is the confusing jumble of too many bright light sources, and energy waste is the cost of sending light where it is not needed. Finally, urban sky glow destroys our view of the universe. Light pollution has the potential to blot out the night sky within a few generations if we do not do something. Fortunately, there are many steps that even the average person can take to stop light pollution from taking the beauty of the night sky away from us.

The average person can help by investing in quality, efficient lighting for their homes. The light from inside buildings is one of the six major causes of light pollution. Good lighting is well shielded, provides just enough light and no more, and uses energy-efficient sources. Other things that help are turning off lights when they are not needed and using night lighting only when it is really necessary. Since most of the problem of light pollution lies outdoors, local governments and businesses have a big responsibility to reduce light pollution, because they can do the most to help reduce it.

Five out of the six major causes of light pollution can be prevented by businesses or governments. These five causes are: public street lighting, the lighting from cars, the floodlighting of advertising signs and buildings, the floodlighting of sports stadiums, and security floodlighting. To start with, outdoor light should be regulated so that the glow does not go to an extreme. If noise pollution can be regulated, then light pollution should be regulated as well. Outdoor fixtures should be shielded just like indoor fixtures, to prevent spillage of light into the night sky. Billboards and signs should be lighted from the top down instead of the bottom up. When installing outdoor lighting, some things to keep in mind are to make the illumination even and to use light only where and when it is needed. These are all simple solutions to a massive problem.

References:
All info obtained from information sheets on www.darksky.org

Wednesday, November 16, 2005

The Evolution of Robotic Surgery--Success and Possibility

by Hannah Shakartzi (Class of '05)

Who ever thought that surgeons would be able to perform surgery from around the world? Developments and the evolution of robotic surgery over the last five years have improved medicine and have made telesurgery possible.

Robotic surgery first made its debut as part of the da Vinci Surgical System. The da Vinci System first enabled surgeons to perform endoscopic, minimally invasive surgery without physically maneuvering surgical tools.

It did so with the use of a viewing and control console along with a surgical robotic arm unit which performed the surgery.

A surgeon was able to look at the viewfinder to see magnified 3D images of the surgical site and control the surgery while sitting at the console. The 3D images were provided by an endoscope, a small surgical camera inside the patient controlled by one robotic arm. The surgeon used foot pedals to control the camera and used hand movements to adjust and reposition the remaining two robotic arms, which performed the surgery. Electrical signals enabled the robotic arms to mimic the surgeon's hand movements.

The Zeus Robotic Surgical System is another surgical aid that has contributed to the advancement of medicine and the idea of telesurgery. It was cleared by the FDA in October 2001.

The Zeus Robotic Surgical System consists of three robotic arms that are mounted on an operating table. It has a computer workstation, a video display, and hand controls that move the three robotic arms maneuvering the surgical instruments. One of the robotic arms is called the Automated Endoscopic System for Optimal Positioning, otherwise known as AESOP. AESOP, as in the da Vinci System, maneuvers the endoscope. However, unlike the da Vinci System, AESOP and the Zeus system itself are voice-activated. In other respects, the two systems function in similar ways.

The two systems have improved medicine in a variety of ways. Both robotic surgery systems provide greater precision and control over the surgical tools. The magnifying lens of the camera gives greater depth perception and a better view of the surgical site than the human eye alone could provide. The robots enable smaller-scale work than conventional surgery permits, and they also eliminate tremors from fatigued surgeons. Fewer surgeons are needed in the operating room because of the robots' assistance. Another benefit is that robotic surgery requires smaller incisions, lowering blood loss and reducing trauma, ultimately resulting in a quicker recovery.

The da Vinci Surgical System, the Zeus Robotic Surgical System and the AESOP, have most importantly led to the development of telesurgery.

Scientists soon began to realize that if surgery could be performed from meters away, then it was also possible from miles and even continents away. With the use of the Zeus Robotic Surgical System and high-speed fiber-optic networking, a surgeon in the United States removed the gall bladder of a sixty-eight-year-old woman in Strasbourg, France, thousands of miles away, on September 17, 2001. Operation Lindbergh was the first telesurgery operation in the world.

Operation Lindbergh was a medical success. It showed that people would be able to receive surgery from surgeons around the world. Telesurgery could prove beneficial to people living in impoverished nations and could lower the price of healthcare by minimizing travel. It could also serve as a model for providing astronauts with surgery on long missions, along with aiding wounded soldiers in battle.

The da Vinci Surgical System and the Zeus Robotic Surgical System have served as the basis for the advancement of surgery around the world and have provided science and medicine with endless possibilities.

References:

Is Global Warming Good For Business?

by Brian Lipson (Class of '05)

Global warming is steadily melting the Arctic ice cap each year, with some surprising side effects. The ice cap reached its lowest level ever recorded this past summer, and as the ice melts, new land and seas become uncovered. The newly exposed lands are potentially hiding valuable resources. Arctic nations are competing to claim the territory, fighting for possible new waterways for trading routes and for oil drilling sites. However, there is a rule about claiming territory: countries are only allowed to claim territory that is part of the continental shelf off each respective coast. As a result, Arctic nations are scrambling to explore and map the topography of the Arctic ocean floor to find areas with elevations that can be considered part of a continental shelf. The countries involved are the United States, Russia, Canada, Denmark, and Norway (the only countries with coasts on the Arctic Ocean). Some expeditions include icebreakers to explore possible drilling sites, while others send scientists to map the topography of the ocean floor for claiming territory. Currently, Canada and the U.S. do not agree on the ownership of the Northwest Passage, a potential major trading route in northern Canadian waters that could open up due to melting ice. Canada claims it is theirs, while the U.S. says it is international water. Canada is making efforts to reconnect with indigenous villages in the Arctic to rally them in support of Canada's claims to the new lands. If it weren't for global warming, Canada would be paying little attention to its Arctic lands, but due to the potential financial gain, the Canadian government is now taking measures to secure its northern territories. Norway is especially eager to claim oil-drilling sites in order to become a huge supplier of oil to countries in need, like the United States.

Overall, there are several possible benefits of the melting ice. On the business side, new waterways that open up for trading vessels could make trade much more efficient. For instance, a trip from some ports in Russia to midcontinental North America usually takes around 17 days via a route through Canada on the St. Lawrence River and the Great Lakes. However, if ports like the one in Churchill, Manitoba (on the coast of Hudson Bay) have longer shipping seasons, ships could take advantage of a route that only takes about eight days (as is the case for Churchill). From there, railroads could transport goods the rest of the way. In the process, Churchill would become a hub for trading vessels, vastly improving the economy and business there. Also, if oil sites were discovered, major sources of new oil would make gas prices drop. The new and faster trade routes would reduce product costs through more efficient trading and would increase business in new port locations. Cruise companies would have new routes to send customers on. Some indigenous villages in the Arctic would have more visitors due to the longer shipping seasons. Tourist visits to Arctic villages are vital to those economies, sometimes bringing in as much as $40,000 per visit. As of right now, these tourist visits are rare, with only about five occurring annually. A continued thawing could increase the visits. A positive environmental side effect is that new sources of oil could decrease the need for oil drilling in places like Alaska and end political arguments over drilling on fragile land on the North Slope.

It is only fair, however, to mention the negative consequences of global warming. Increased shipping makes oil tanker accidents more likely. Not only are oil spills catastrophic for the environment, but on the business side, fishing communities such as those in Norway worry that an oil spill would wipe out the fishing industry (especially because oil spills take longer to clean up in cold water, where there is less wave action). If the ice in the Arctic continues to melt each year, ocean levels could rise to the point where coastal human settlements are flooded. In addition, more urgent effects like severe weather and changing climates could have serious impacts on humans and animals. Many animals rely on the winter ice pack for food. For instance, polar bears use ice packs to hunt seals, but significantly reduced ice packs are starting to starve polar bears. A possible negative business side effect is that severe weather and climate changes could ruin crops and agriculture in general. Also, migrating fish, namely salmon, are moving farther and farther north in search of cold waters. Fisheries may have to relocate and move north to keep up with the migrating fish. Finally, in addition to the human toll of severe storms, the cost of insurance and recovery efforts could total around 150 billion U.S. dollars per year in the next decade alone.

References:
  • The New York Times: "As Polar Ice Turns to Water, Dreams of Treasure Abound." By Clifford Krauss, Steven Lee Myers, Andrew C. Revkin and Simon Romero. Part 1 of "Big Melt" series, from October 10, 2005.
  • The New York Times: "Old Ways of Life Are Fading as Arctic Thaws." By Steven Lee Myers, Simon Romero, Clifford Krauss, and Andrew C. Revkin. Part 2 of "Big Melt" series, from October 20, 2005.
  • Wikipedia: "Global Warming (effects)"
  • The Boston Globe: "The Heat is on." By Beth Daley, Globe Staff November 14, 2005

How do hydrogen fueled cars work?

by Gareth Lewis (Class of '05)

Fuel cells have been in the news a lot lately. According to news reports, we may soon be using the technology to generate electrical power for our cars. The technology is appealing because it offers a means of making power more efficiently and with less pollution. But how does it do this?
What is a Fuel Cell?

A fuel cell is an electrochemical energy conversion device, that is, a device that converts the chemicals hydrogen and oxygen into water and, in the process, produces electricity. Another electrochemical device that we are all familiar with is the battery. A battery has all of its chemicals stored inside, and it converts those chemicals into electricity. This means that a battery eventually dies and needs to be thrown out or recharged. With a fuel cell, chemicals are constantly flowing into the cell, so it never dies as long as it is supplied with fuel. Most fuel cells in use today use hydrogen and oxygen as the chemicals. A hydrogen car uses hydrogen as its primary source of power for locomotion. There are two main methods: combustion or fuel cell conversion. Fuel cell conversion is what has been in the news most recently and is expected to power the cars of the future.

In fuel cell conversion, the hydrogen is turned into electricity through fuel cells, which then powers electric motors. The proton exchange membrane fuel cell (PEMFC) is one of the most promising technologies. It transforms the chemical energy liberated during the electrochemical reaction of hydrogen and oxygen to electrical energy. A stream of hydrogen is delivered to the anode side of the membrane-electrode assembly. At the anode side it is catalytically split into protons and electrons. This oxidation half-cell reaction is represented by:

H2 => 2H+ + 2e-

The newly formed protons filter through the polymer electrolyte membrane to the cathode side. The electrons travel along an external circuit to the cathode side of the membrane-electrode assembly, and this creates the current output of the fuel cell. Meanwhile, a stream of oxygen is delivered to the cathode side of the membrane-electrode assembly. At the cathode side, oxygen molecules react with the protons permeating through the polymer electrolyte membrane and the electrons arriving through the external circuit to form water molecules. This reduction half-cell reaction is represented by:

O2 + 4H+ + 4e- => 2H2O

How a fuel cell works: Simplified

At a glance these equations look relatively simple. The reactions themselves, however, are not as simple as they look, so here is a plainer way of explaining them. On one side (the anode side), hydrogen is channeled through to the anode. An anode is the electrode that electrons flow out of into the external circuit. At the same time, oxygen from the atmosphere is channeled into the other side, called the cathode side. A cathode is the electrode at which electrons flow back into a cell, tube, or diode, whether driven externally or internally. A catalyzed reaction on the anode side causes the hydrogen to split into positive ions (H+) and negatively charged electrons (e-). A membrane allows only the positive ions to pass into the cathode side. The electrons travel along an external circuit to the cathode, and this external circuit is what supplies the electrical current that powers the electric motor. When the electrons and the positive ions combine with the oxygen at the cathode, they create water.

Chemistry of a Fuel Cell
Anode side:
2H2 => 4H+ + 4e-
Cathode side:
O2 + 4H+ + 4e- => 2H2O
Net reaction:
2H2 + O2 => 2H2O
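
For a feel for the numbers, here is a back-of-the-envelope Python calculation (not from the article) of the ideal voltage a single hydrogen fuel cell can deliver, using the standard Gibbs free energy of forming liquid water; real cells deliver less because of internal losses.

# Ideal (reversible) voltage of a hydrogen fuel cell, from the net reaction
# 2H2 + O2 => 2H2O. Each water molecule formed at the cathode consumes two
# electrons that traveled through the external circuit.
FARADAY = 96485.0        # coulombs per mole of electrons
DELTA_G = -237_130.0     # J per mole of liquid H2O formed (standard conditions)
ELECTRONS_PER_H2O = 2

ideal_voltage = -DELTA_G / (ELECTRONS_PER_H2O * FARADAY)
print(f"Ideal cell voltage: {ideal_voltage:.2f} V")   # roughly 1.23 V

A single cell therefore tops out around 1.2 volts, which is why automotive fuel cells are built as stacks of many cells in series.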

References:
  • Adams, Victor. "Fuel Cells." Physics World, July 1998. Physics Web, accessed 4 November 2005.
  • Ford, Royal. "Out Of Thin Air." Boston Globe, 31 October 2005, Business ed.: E6+E4.
  • "Fuel Cell." Wikipedia, 15 November 2005, accessed 4 November 2005.
  • "Fuel Cells Basics, how they work." The Online Fuel Cell Information Resource, accessed 4 November 2005.

Hayabusa on Mission to Discover Origins of the Solar System

by Geoff Counihan (Class of '05)

The Japanese launched the Hayabusa spacecraft on May 9th, 2003 from their Kagoshima launch site. The spacecraft was meant to rendezvous in midsummer 2005 with the near-Earth asteroid Itokawa and to bring samples back to Earth in June 2007. Since asteroids are leftover material from the formation of the solar system, these samples will hopefully help us determine what elements made up the early solar system.

Itokawa is a 600-meter-long, potato-shaped asteroid named after Hideo Itokawa, an early Japanese rocket scientist. The goal of the mission is to land on the surface of the asteroid, collect samples from the surface, and parachute them down at Woomera, Australia. This will let scientists study the minerals that compose the asteroid and better understand the early composition of the solar system. In addition to this primary objective, the mission is also meant to prove out the electric propulsion engines, an autonomous navigation system, the sample collection system, and the re-entry equipment.

During Hayabusa's flight, two of the three reaction wheels failed and the solar panels were slightly damaged by solar flares. This delayed the arrival at the asteroid until September 2005. These problems have required the scientists to rely more on Hayabusa's chemical propellant thrusters to maintain control of the spacecraft. They have also made the scientists much more concerned about fuel conservation.

Since September, Hayabusa has been circling a few miles from Itokawa and collecting data on the asteroid. The entire surface has been examined with cameras and X-ray spectrometers to map the terrain and identify mineral composition. Hayabusa has also been taking detailed pictures to identify the best landing site. There are two possible locations, both generally smooth; almost all the rest of the asteroid is covered in jagged rocks and rough terrain. These two potential sites are now being thoroughly examined to determine which is best for Hayabusa.

On November 4th, Hayabusa ran into trouble during a practice landing session in preparation for the planned landings on November 12th and 25th. Scientists shut down the practice run due to unusual signals from the spacecraft and have not yet rescheduled a landing.

When Hayabusa does land on Itokawa, it will release an ultra-small Micro/Nano Experimental Robot Vehicle for Asteroid, known as MINERVA. MINERVA is a hopping robot lander with three color cameras, two for stereoscopic close-ups and one for distance shots. An interesting fact about the design of this 1.3 lb. lander is that it will hop around on the asteroid's surface, transmitting data back to the hovering spacecraft.

During the landing, Hayabusa will shoot a marker into the landing area that will reflect light back to the spacecraft. It will then use a laser ranging device to descend to the asteroid's surface. Once there it will quickly sample the surface, release MINERVA, and take off.

The most difficult part of the mission is clearly landing on the asteroid and gathering samples. The communication lag from the asteroid to Earth is 10 minutes, meaning that the spacecraft will have to make most of the important decisions on its own, without direct control from the scientists on Earth.
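
As a quick sanity check on that figure, a 10-minute one-way radio delay corresponds to a distance of roughly 1.2 astronomical units; the short Python sketch below (an illustration, not mission data) makes the arithmetic explicit.

# Distance implied by a 10-minute one-way light delay. A command-and-response
# cycle would take at least 20 minutes, which is why the spacecraft must land
# autonomously rather than under joystick control from Earth.
SPEED_OF_LIGHT_KM_S = 299_792
AU_KM = 149_597_871

delay_s = 10 * 60
distance_km = delay_s * SPEED_OF_LIGHT_KM_S
print(f"{distance_km:,.0f} km, about {distance_km / AU_KM:.1f} AU")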

References:

Tuesday, November 15, 2005

Formation of Stars

by Sam Wass (Class of '05)

Stars are the fundamental components of space. While scientists have established that stars are “self gravitating balls of mostly hydrogen gas that act as thermonuclear furnaces to convert hydrogen into heavier elements of the periodic table” (1, p1), they have been unable to formulate a strong model of star formation. For much of the last 50 years, observation of the star-forming process has been severely impeded by the fact that most stars form inside dark clouds that optically block the process. Infrared and millimeter-wave technology has significantly expanded our knowledge of how stars develop.

The dark clouds that make up the band splitting the Milky Way (when observed in a clear night sky) are the locations of star formation. These clouds are primarily made up of very cold hydrogen gas. The actual creation process occurs in the dense centers of these clouds. If an otherwise stable dense core is sufficiently disturbed or loses internal support, the core’s dusty material collapses, and the creation process begins. Any disturbance, including supernovae and spiral density waves, has the potential to upset a cloud’s equilibrium. Scientists have calculated that the molecular core must collapse to a size nearly ten orders of magnitude smaller than its original dimensions in order to become dense enough to develop into a protostar, the first stage of star development. As particles contract toward the core, their gravitational potential energy decreases; that lost energy is converted into thermal kinetic energy, so the temperature rises.
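
The standard way to quantify when such a core “loses internal support” is the Jeans criterion: a core collapses once its mass exceeds the Jeans mass set by its temperature and density. The short Python sketch below uses typical textbook values for a cold molecular core (10 K, about 10^4 particles per cubic centimeter); these numbers are illustrative and are not taken from the cited sources.

import math

# Jeans mass: the largest mass a core of temperature T and density rho can
# support against its own gravity with thermal pressure alone.
k_B = 1.381e-23       # Boltzmann constant, J/K
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
m_H = 1.673e-27       # mass of a hydrogen atom, kg
mu = 2.33             # mean molecular weight of cold molecular gas

T = 10.0              # core temperature, kelvins
n = 1e10              # number density, particles per m^3 (10^4 per cm^3)
rho = mu * m_H * n    # mass density, kg/m^3

M_J = (5 * k_B * T / (G * mu * m_H)) ** 1.5 * (3 / (4 * math.pi * rho)) ** 0.5
print(f"Jeans mass: about {M_J / 1.989e30:.1f} solar masses")

Cores heavier than this few-solar-mass threshold can no longer hold themselves up against gravity and begin the collapse described above.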

Once the protostar is born, the forming star enters the accretion phase of stellar evolution. During this time the star’s core slowly gains mass by accreting material that is pulled onto it. Once the embryonic core reaches a mass of roughly 0.2 to 0.3 solar masses, its temperature is high enough to begin deuterium-burning nuclear reactions. Once the core reaches a temperature of about 10^7 Kelvins, hydrogen fusion commences, and a low-mass star then enters the main sequence of stellar evolution.

The conditions that determine the main sequence evolution of a future star are the protostar’s mass, radius, and luminosity at the point where accretion stops. Once all of the material floating around a protostar is accreted, the protostar becomes visible to the telescope. The continued hydrogen burning within the star increases the star’s internal pressure to balance gravity. The star will begin a long phase of equilibrium along the main sequence of the stellar life cycle.

This discussion outlines the current model of star formation, but it applies only to fairly specific star-forming conditions. The model cannot represent stars that form in clusters or massive stars larger than eight solar masses. Solving these problems requires learning more about the mysterious formation of the dark clouds themselves, which will take more careful observations and modeling. The study of star formation continues to be a major focus of astronomical research. Fully understanding this complex phenomenon will hopefully expand our overall understanding of the formation of the universe.

Works Cited (active November 15, 2005):
  1. Star Formation in the Galaxy, An Observational Overview
  2. Formation of Stars and Planets
  3. Wikipedia: Star

New Orleans Levees and Floodwalls

by Dianne Kim (Class of '05)

The havoc and destruction wreaked by Hurricane Katrina brought into question the effectiveness of the levees put in place to combat such natural disasters. The levees were designed to block the storm surges of category 3 storms, and Katrina entered New Orleans as a category 4 storm. Before redesigning a levee, engineers must consider this: is the Mississippi Delta still capable of serving as a buffer that can absorb surges and rising ocean levels, or has it been washed away beyond repair, necessitating a 300-mile wall to hold back the Gulf?

The hurricane also had a significant impact on nature and wildlife. Environmentalists had filed a suit in the U.S. District Court in New Orleans, claiming that bottomland hardwood wetlands must be spared if the Louisiana black bear is to survive and that the lands serve as breeding grounds for many species of birds in the lower Mississippi. For years, environmentalists have condemned levees as artificial barriers to nature, “robbing the river of its ability to sustain itself by disrupting the natural flow and deposit of the Mississippi River’s sediments.” The levees installed at the mouth of the Mississippi River spared New Orleans but starved the wetlands of the sediment, nutrients, and freshwater they need to survive.

Could the destruction of Katrina have been prevented? Researchers say yes. Upon arriving at the scene one month after the hurricane, they found more breaches throughout the levee system than they had anticipated. The storm surge eroded soils from the base of the landward side of some levee sections. In other sections, water percolated under the sheet pilings through layers of peat, sand, and clay, and bubbled up on the other side. These levees were driven only 10 feet into the ground, whereas levees driven 25 feet into the ground kept the water at bay.

In 1998, scientists and engineers proposed a $14 billion plan called Coast 2050, which outlined strategies to revive the delta and control flooding. It was rejected by Congress, but considering the rate of wetland loss, land subsidence, sea-level rise, and the increasing frequency and severity of storms, the proposal should be back on the table.

Coast 2050 proposed several different strategies: one such plan proposes connecting the barrier islands and outer marshes with levees, dams and floodgates. This would essentially create a continuous rim, or circle of safety, if you will, around the delta.

The Dutch government used this kind of network, but to prevent the loss of marshland it erected an extensive series of sluices whose doors remain open year-round and close only when storms approach.

Venice, the sinking city, will rely on mobile floodgates as well, but with a twist: the gates will lie flat on the seabed under normal conditions and rise only during extremely high tides. All these measures are designed in part to appease environmentalists.

Engineers need a new vision to revive Louisiana. The region produces one-third of the country’s seafood and provides wintering grounds for 70% of the nation’s migratory waterfowl. And still, at least 25 square miles of wetlands are lost every year. Katrina didn’t help either: the excess salt it brought in probably “baked” the soil and is expected to kill off marsh grass.

So what are the lessons we need to draw from Katrina? Global-warming models show that sea level will climb 1 to 3 feet this century and that storms will increase in frequency and intensity. Some may argue that we are simply in a natural cycle, but such cycles run for 25-30 years and we are only 8 years into the current one. The Coastal Vulnerability Index, a fine-tuned formula developed by the USGS, predicts how at risk an open coastline is to high seas.

Though politicians are trying to save face and are in a scramble to act fast, a long-term plan is needed. Instead of working against nature, we need to work with it.

References:
  • Scientific American, November 2005 "Protecting Against the Next Katrina."
  • www.csmonitor.com, "Greens vs. Levees"
  • www.fas.org CRS Report for Congress, "Hurricane Damage Protection"

Picking: A Delicate and Furtive Pastime

by Connie Wang (Class of '05)

It's cold, it's late, and you're locked out of your house. What to do? If this weren't your house, a brick through a back window would do. However, because you don't want to be grounded for the rest of your high school life, you need to find a resourceful way of letting yourself in quietly without causing any property damage or waking up your parents.

Luckily, you happen to keep a set of professional lock picks in a spiffy black leather case in your back pocket. The lack of light on your front porch prevents you from seeing what you're doing, so you must rely on your well-honed instincts to intuitively sense what's going on inside the lock. The locks on front doors are generally of the deadbolt variety and use a cylinder (or pin tumbler) lock. A typical pin tumbler lock is a cylinder within a cylinder, with one rotating inside the other. A full rotation of the inner cylinder, also called the plug, turns the cam, which in turn physically unbolts the lock. Without a key, however, the rotation is blocked by a series of pin sets placed in a line along the length of the cylinder; each set is a point of resistance. A set of pins consists of two components: the uppermost pins are all the same size, whereas the pins at the bottom are of varying lengths. Each pin set sits in a small shaft in the plug, with a spring at the top to hold it in place; the pins bridge the inner plug and the outer cylinder, pinning them together.

You have two main tools: a pick and a tension wrench. Picks are long, slender pieces of metal, curved or bent at the end, that you reach into the keyway to push the pins up. The tension wrench can be thought of as a delicate flathead screwdriver. Think of the tension wrench as your key, only without the ability to push the pins the right way. While the tension wrench applies a slight turning force to the plug, one set of pins at a time binds against it; you must, with your pick, gently force that binding set of pins up above the shear line. That set of pins has been successfully "picked." The process continues until all of the uppermost pins sit above the shear line, allowing the plug to rotate freely and unbolt the door.

You begin by putting the thin head of the tension wrench into the opening of the lock. Apply pressure gently in the direction in which you would turn the key, turning until you can feel resistance. Without reducing the pressure, use your other hand to hold the pick. Feel around for the first set of pins that is obstructing your path. When you find it, use the bent end of the pick to delicately force up the bottom pin. You know you've succeeded when the spring yields and the uppermost pin comes to rest in the outer cylinder while the bottom pin stays in the inner cylinder. Now you are free to turn the tension wrench a little further in the same direction as before (that is, the direction in which you would turn the key). Continue turning the tension wrench until you encounter resistance again. Perform the same steps a few more times until every pin is set, then rotate the entire plug, unbolt the lock, and slip unnoticed into your quiet house.
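
To make the one-pin-at-a-time logic concrete, here is a toy Python model of the process; the number of pins and the random binding order are invented for illustration and say nothing about any real lock.

import random

def pick_lock(num_pins=5):
    """Toy model of single-pin picking: under light tension only one pin binds
    at a time; setting it lets the plug rotate slightly so the next pin binds."""
    binding_order = random.sample(range(num_pins), num_pins)  # set by machining tolerances
    set_pins = []
    for pin in binding_order:
        # keep tension on the wrench, probe with the pick, and lift the binding
        # pin until its upper half clears the shear line, where it stays set
        set_pins.append(pin)
        print(f"pin {pin} set ({len(set_pins)}/{num_pins})")
    return len(set_pins) == num_pins  # all pins above the shear line: plug turns

if pick_lock():
    print("plug rotates freely - the bolt draws back")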

References:
1. http://www.lysator.liu.se/mit-guide/mit-guide.html
2. http://home.howstuffworks.com/lock-picking.htm

Sea Walls: Retention Comprehension

by Charlie Bartlett (Class of '05)

With so many people wanting to live near the water, it is no surprise that many attempts have been made to control the sea and rivers, and to increase inhabitable coastline. The most common technique in both residential and public construction is to build a seawall or a retaining wall. Seawalls come in many forms, but they all have the same basic purpose: to create more livable and safer areas near the coast. Retaining walls do the same along riverbanks, channels, etc. But do these attempts succeed? Can the forces of something as powerful as an ocean really be controlled by human construction?

A seawall is a structure that tries to slow the forces of erosion. In the short term, seawalls can be very successful at accomplishing this. If the material used to construct the seawall is durable enough, it can shield the part of the beach behind the wall by absorbing and deflecting the force of the waves, force that would otherwise eat away at the shore.

However, the problem with a seawall arises in the long term. While the seawall does in fact deflect the force of a wave, that force is not lost. It creates a backwash off the wall. The backwash, in turn, along with crosscurrents, upward currents, and other factors, scours away the sand in front of the wall (erosion is also increased by influences such as wind and water level). This becomes a serious problem once all of the sediment in front of the seawall has been carried out to sea.

When the beach in front of the wall has been eroded, the property owner is in the most danger. At this point, the ocean will proceed to erode the seawall itself, and the wall will ultimately crumble. When this happens, not only is a smaller and more vulnerable section of the beach exposed, but the pieces of the seawall now aid the erosion. The broken pieces are moved and churned by the waves, cutting away more of the sand underwater and quickening erosion.

A retaining wall is similar to a seawall, but it is usually meant to control a river instead of an ocean. A retaining wall does not have to deal with erosion to the extent a seawall does, since the water does not undercut the structure. However, the problem with a retaining wall arises when the force and height of the river increase.

When the river rises, there is the possibility of flooding, as the water could top the wall. When the force increases, the wall could break, causing a similar problem. The only seemingly certain way to stop this threat would be to build a massive, extremely thick wall. This, however, is very impractical and very unsightly, and will probably not be done in most areas.

This is all well and good, but who cares? These issues apply in one way or another to many areas of the country, but they become immeasurably important in a place like New Orleans. When an area is below sea level, on the coast, and next to a major river, all of these factors come into play. Hopefully, the area will learn from its mistakes and correct the problem in the future.

References:

Stem Cells in Cancer Treatment

by Sharon Ron (Class of '05)

Chemotherapy, as a treatment for high-risk cancers, is most commonly coupled with a bone-marrow transplant. However, provocative new evidence shows that patients who undergo peripheral blood stem-cell transplantation, as opposed to bone-marrow transplantation, experience fewer complications, recover more quickly, and have better relapse and survival rates.

Chemotherapy kills off cancer cells; however, in the process it also kills off hematopoietic stem cells. These stem cells are essentially immature blood cells produced in the bone marrow that can develop into red blood cells, white blood cells, or platelets. These cells are essential to life, so when they are killed off in large numbers during chemotherapy, the patient is put at risk for life-threatening infections. This is why procedures such as stem-cell transplantation are vital.

While it is true that all a patient needs is some sort of stem-cell transfusion, research is finding that the stem cells in bone marrow are not as effective at staving off disease as those from peripheral blood. In a study of small cell lung cancer (SCLC), 18 patients were treated with chemoradiotherapy and then received two cycles of treatment with chemotherapy drugs, followed by an infusion of peripheral blood stem cells collected from the bloodstream. The typical symptoms seen after chemotherapy, such as high toxicities, diarrhea, and kidney problems, arose infrequently. Another study, led by a clinical research division, found that out of 172 patients evaluated, the survival rate for cancer patients transplanted with peripheral blood stem cells was 65%, while the survival rate for bone-marrow transplantation was 45%. “The results are exciting because most strategies aimed at reducing relapse are associated … with more complications and higher mortality,” says the scientist leading the study, Dr. William Bensinger. He suggests that the difference may lie in functional differences between marrow stem cells and stem cells from peripheral blood. Bensinger speculates that stem cells are more abundant in peripheral blood than in bone marrow, although sufficient research has not yet been done to prove this.

The stem-cell collection process might prove to be key to discovering what accounts for this substantial difference. Stem cells from bone marrow are removed via a large syringe inserted into the donor’s hip bone until the correct amount of marrow has been collected. Collection from peripheral blood, by contrast, involves stimulating stem-cell growth through drug injections and then, after a few days, collecting the blood. The process separates and collects only the white blood cells, because that fraction contains all of the blood’s stem cells. As a result, patients undergoing a transplant of peripheral blood stem cells also receive a large number of T-cells.

Patients treated with stem cells from peripheral blood rather than bone marrow are clinically proven to have quicker recoveries and fewer relapses, complications, and deaths. With evidence such as this, it seems a change in the chemotherapy recovery process is imminent.

References:
Active (all) as of 11/03/05

1-866-GRO-BONE

by Zoe Philip (Class of '05)

The Stryker Biotech division of Stryker Corporation has made a giant step forward in the regeneration of human tissue. Its studies are focused on the renewal of various human tissues that have been damaged through disease or injury. Currently, its lead drug on the market is a protein called Bone Morphogenetic Protein 7 (BMP7). It is approved in thirty different countries, including those of the European Union as well as the United States, Australia, Canada, and Switzerland. The presence of BMP growth factors, molecules capable of stimulating the renewal of cells, was first hypothesized in the mid-sixties by an orthopedic surgeon, Doctor Marshall Urist. By the seventies BMP7 had been discovered, and Stryker acquired the rights to study and manufacture the protein by the early nineties.

BMP7 is capable of stimulating the regeneration of bone. The protein can stimulate fractures to heal and can create bone fusions in areas such as the spine. All it takes is a local injection (an injection at the site of injury) of the protein. The protein then reacts with receptors on mesenchymal stem cells. These stem cells are present in adults as cells designated to manufacture more of a mesenchymal tissue, such as bone. The protein's job is to attach to the ActRII receptors, which stimulate the Smad pathway, the signaling pathway within bone cells that tells the cell to make more bone. In some people, fractures simply will not heal because their stem cells are not being stimulated to produce more tissue. When BMP7 is injected into such a person, it provides the signal needed to trigger regeneration. BMP7 is just the first step toward discovering and harnessing other proteins that can regenerate tissues besides bone.

Three components are needed to regenerate tissue: stem cells, a signal to stimulate the stem cells, and a matrix, a substance onto or into which the new cells can grow. The stem cells are already present in most tissues from birth. In the case of re-growing or creating bone, the signal for the cell to begin producing more tissue is the BMP7 protein. In order for the injection of BMP7 to work, Stryker Biotech has created several different matrices. One of them is bovine collagen, used for healing fractured bones. Another, tricalcium phosphate, is used in spine fusions. Through the production of these materials, Stryker has made it possible for damaged tissue to heal.

In recent pre-clinical trials, BMP7 has shown that it is capable of regenerating cartilage. If the studies show that the protein can consistently produce new cartilage, then it could be used to treat degenerative disc disease and osteoarthritis and to heal sports injuries. Unfortunately, the process of getting a drug to market is time-consuming and expensive. A new drug must be proven to be safe, effective, and able to be manufactured consistently. On average it takes about 15 years and about 800 million dollars to bring a drug to market. Despite the time and the cost, though, the efforts are worth it, bringing relief to patients in pain.

References:

Monday, November 14, 2005

Venus Express Takes Off

by James Cassettari (Class of '05)

November 8th, 2005 marked the date on which the Venus Express satellite was launched by the European Space Agency. It will photograph and analyze the harsh, hot, dry planet from an orbit that will last over a year. This will help answer speculation about possible traces of life in the clouds of Venus.

The mission is the first Venus exploration undertaken by the European Space Agency (ESA). There is a fascination with Venus, and it is even considered Earth's evil twin because its size and mass are extremely similar to Earth's. This is interesting to scientists because it can help provide new insights into what happened to make the two environments so drastically different.

The Venus Express is a continuation of the Mars Express mission that the ESA had put together previously. One major exception is that the designers needed to account for the fact that Venus is about twice as close to the Sun as Mars. The satellite used in the Venus Express mission is virtually the same, except for accommodations made to increase its strength and resistance to the greater heat in the vicinity of Venus.

The project has involved years of planning; it was started in 2001, and testing finally came to a conclusion in September 2005. With its final electrical testing complete, the spacecraft was ready for shipment to its launch site, the Baikonur Cosmodrome in Kazakhstan, where it would be launched by a Russian Soyuz-Fregat rocket. On November 5th, 2005 it was moved to the launch pad to take off on a very important mission for the ESA.

There was a minor setback when the initial launch was postponed due to contamination of the satellite's very important insulation. Due to this inconvenience, the launch date was changed from October 26th to November 8th. It will take Venus Express an estimated 153 Earth days to complete its trip from Earth to its orbit around Venus. When it finally reaches that orbit, it will take the equivalent of five Earth days to settle into a good orientation to the planet. It is going to spend 500 days gathering data in its orbit of the planet, but it will not be landing. There have been a few landings on Venus, but unfortunately the planet's harsh climate, which includes temperatures of around 464 degrees C (864 degrees F) and acid rain, has made landing missions difficult. The satellite will analyze the dense clouds and photograph different areas.

The photographs and measurements that Venus Express collects will be radioed back to Earth; one of its instruments, VeRa (Venus Radio Science), uses these radio links to probe the atmosphere as well. The highest-quality photos will come from the wide-angle Venus Monitoring Camera (VMC). The VMC is capable of capturing still and moving images and is equipped with ultraviolet, thermal, and visible channels, which means it will take pictures that focus on different aspects of the planet, such as heat and light.

All in all, the data recovered by Venus Express will provide very detailed and important information regarding Venus' extremely complex and unforgiving atmosphere. It will be a very big accomplishment for the ESA and will provide valuable information to the world.

References:

Sunday, November 13, 2005

Recent Hurricanes Surged by Global Warming

by Ani Sanyal (Class of '05)

The recent increase in the number of hurricanes hitting US coasts has scientists wondering whether there is a larger reason for this drastic change. In August, the US witnessed the wrath of Hurricane Katrina, which devastated the New Orleans and Mississippi area. The unusual strength of Katrina prompted scientists to hypothesize that global warming and drastic climate changes are responsible for creating more powerful hurricanes. Hurricane numbers have steadily risen since 1995, with nine hurricanes in 2004 and even more predicted for 2005. As communities in the US begin to rebuild after the damage, scientists are looking for the reasons behind the change and for ways to prevent such destruction in the future.

Hurricanes first take shape over warm bodies of water in tropical areas. As warm, moist air rises from the surface of the ocean, its water vapor condenses and forms small storm clouds. The condensation releases heat, which keeps the air in the cloud rising and draws in more warm air to replace it. As this process continues, it forms a circulating pattern of wind concentrated around the middle of the hurricane. This wind propels the hurricane over land and water, eventually crashing into a coastline. Scientists believe that the recent intensified nature of hurricanes can be explained by examining the effects of global warming.

The effect of global warming on hurricanes is straightforward: as the climate warms, it allows hurricanes to gain more strength right from inception. If the air rising into a hurricane is warmer at the beginning, it will increase the wind speed and the amount of rainfall the hurricane is able to deliver. Scientists point to the increase of carbon dioxide and greenhouse gases in the atmosphere as the cause of a new wave of global warming. This, coupled with changes in the climate due to El Nino and the North Atlantic Oscillation, has led to a visible increase in storm strength. Another byproduct of global warming is an increase in atmospheric pressure, which is a main component in determining the rainfall of a hurricane. When all these factors combine, the result is a more powerful hurricane.

This poses the question: what can be done about the situation? As the trend of increasing hurricane intensity becomes evident, it poses a problem for the government, insurance companies, and residents along the affected coasts. With the potential for damage higher than before, we must find a way to combat this growing threat. Though it is not possible to physically prevent a hurricane from happening, it is possible to predict the path of a hurricane and take measures to minimize damage. Scientists have begun developing a ‘power-dissipation index’ to gauge the intensity of a particular hurricane and prepare for the aftermath accordingly; the index gives a reading based on the wind speed and life span of an approaching hurricane, as sketched below. In addition to this index, scientists should develop simulation programs that would enable them to predict the inception of a hurricane by factoring in current conditions. By gathering data on water temperatures, carbon dioxide levels, and atmospheric pressure, among other inputs, scientists could create a program that would let them predict where the next monster hurricane might start. In terms of minimizing the damage, scientists are examining the possibility of increasing forestation along coastal areas, creating additional channels for rivers to deposit more silt (e.g., cutting channels for the Mississippi River), and reclaiming additional land so that the damage caused by hurricanes is dissipated before it hits coastal communities.
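
Here is a minimal Python sketch of such an index, in the spirit of Kerry Emanuel's power dissipation index (the sum of the cube of a storm's maximum sustained wind speed over its lifetime); the 6-hourly wind values below are invented for illustration, not observations.

def power_dissipation_index(max_winds_m_s, interval_s=6 * 3600):
    """Integrate the cube of maximum sustained wind speed over the storm's life."""
    return sum(v ** 3 for v in max_winds_m_s) * interval_s

# hypothetical 6-hourly maximum sustained winds (m/s) for one storm
storm = [18, 25, 33, 45, 58, 70, 65, 50, 35, 20]
print(f"PDI: {power_dissipation_index(storm):.2e} m^3/s^2")

Because the wind speed is cubed, a storm that is both stronger and longer-lived scores disproportionately higher, which is what makes this kind of index useful for comparing whole seasons as well as individual storms.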

References:

Recent Hurricanes Not Surged by Global Warming

by Elizabeth Bauman (Class of '05)

Due to the increased frequency of hurricanes in the past several years, the media and the uninformed masses have searched for and finally stumbled upon the perfect scapegoat: global warming. Global warming, such people proclaim, is causing storms to grow in frequency and intensity. Such a cause is an easy target; global warming has been blamed for decades and is a constant environmental issue. It also seems correctable, which leaves those distraught by weather phenomena feeling less helpless. While the theory that global warming is aggravating hurricanes is appealing and seemingly logical, scientists continually refute this misconception.

The frequency of hurricanes, insist climate researchers, is due to natural variations in hurricane patterns; these variations are unaffected by global temperatures. The patterns shift every twenty years or so. In the 1940s, the Atlantic Ocean experienced an outburst of hurricanes similar to this decade's; in the 1970s, by contrast, the storm front was rather quiet. Statistics show that hurricanes have actually declined in number over the last century, even counting the eruptions of the '40s and of this decade. A sudden increase in the number of hurricanes, when that number is still smaller than the average of the past one hundred years, does not make global warming responsible for this year's storms, let alone for a century of steady decline.

The average intensity of hurricanes has also decreased throughout the century, a trend that directly contradicts the global warming theory. Global warming would mostly affect the polar regions. Since the intensity of hurricanes depends on the temperature difference between the tropics and the poles, global warming would actually lessen the vigor of hurricanes as the polar regions became warmer.

Another argument of those who indict global warming as the cause of 2005's hurricanes is that increasing temperatures are raising sea levels. Sea levels, however, are currently rising more slowly than the average rate of increase over the past eighteen thousand years (three hundred feet over eighteen thousand years works out to roughly a fifth of an inch, or about five millimeters, per year). The steady rise of sea level is a natural characteristic of the period between ice ages, and is thus unaffected by human intervention in nature and the release of carbon dioxide into the atmosphere.

In response to the plethora of questions on this issue, climatologists try to steer inquirers in a scientifically logical direction: “Rather than blaming global warming - for which there is little supporting meteorological evidence - emphasis on emergency preparedness and further storm research would be a constructive response.” The researchers recognize that the emotional need to blame something has instigated these allegations against global warming.

While global warming has easily filled the void of responsibility for the events of this year, its incrimination is inaccurate, according to scientists. Although the natural patterns of weather are yet to be explained in a concrete manner, global warming might, over time, lessen the intensity and thus the fatal effects of hurricanes in the future. Global warming may be the cause for many environmental problems, but it is guilt-free of the so-called “increase” in hurricanes experienced in 2005.

References:

The New "Tango Array" Makes All the Difference

by Tazneena Ishaque (Class of '05)

Through close teamwork, Princeton scientists have invented a device that rapidly sorts microscopic particles into extremely fine gradations of size. Before this invention there was no way to sort large quantities of molecules or cells by size with such speed and precision. The Princeton device can distinguish large quantities of particles "that are 1.00 micrometer from others that are 1.005 microns in a matter of seconds" ("Tiny Tango device speeds sorting micro particles"). The device has been nicknamed the "tango array," and it is viewed as a very important advancement.

The effort was led by Lotien R. Huang, a postdoctoral researcher in electrical engineering, who worked with James Sturm, professor of electrical engineering; Robert Austin, professor of physics; and Edward Cox, professor of molecular biology. In the past they have produced a variety of devices for sorting particles, but none as fast and precise as the tango array. Huang started the project when a colleague challenged him to "come up with a mathematical description of how his earlier attempts at sorting devices worked: if he altered a device, could he predict exactly how its performance would change?" ("Tiny Tango device speeds sorting micro particles"). At first, Huang thought it would be impossible to come up with such a model, but within days he had not only derived a mathematical theory but also formulated an idea for a new device with no trade-off between speed and accuracy. The resulting device is impressive: it separates particles by size in minutes, and its operation is very simple.

The tango array has opened up a range of possible future uses. The device consists of an arrangement of microscopic pillars carved into silicon. Air pressure from a syringe forces a liquid suspension of particles through the pillar array, which guides the particles along different paths. When the particles emerge from the array, they have been sorted into channels according to size. The device works exceptionally well because the arrangement of pillars forces particles along completely fixed paths, unlike previous designs that relied on particles spreading randomly.
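One way to picture the sorting principle the article describes is the toy Python model below, which assumes, as in so-called "bump array" designs, that particles larger than some critical diameter are nudged sideways at every row of pillars while smaller ones follow the flow straight through. The critical diameter, shift per row, and row count are invented for illustration and are not the Princeton device's actual parameters.

    # Toy model of size-based sorting in a pillar array. All parameters
    # are invented for illustration; they are not those of the real device.
    CRITICAL_DIAMETER_UM = 1.002   # particles larger than this get "bumped"
    BUMP_PER_ROW_UM = 0.5          # lateral shift per pillar row when bumped
    N_ROWS = 200                   # pillar rows the particles pass through

    def exit_offset(diameter_um):
        """Lateral displacement (um) of a particle after the last row."""
        if diameter_um > CRITICAL_DIAMETER_UM:
            return N_ROWS * BUMP_PER_ROW_UM   # follows the displaced path
        return 0.0                            # follows the flow straight through

    for d in [1.000, 1.005, 1.000, 1.005, 1.001]:
        channel = "displaced channel" if exit_offset(d) > 0 else "straight channel"
        print(f"{d:.3f} um particle -> {channel}")

Because every particle's path is fixed by geometry rather than by random diffusion, particles that differ by only a few thousandths of a micron end up in different output channels.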

This new invention will greatly help to speed up and expand many areas of biological research. It could largely replace many of the devices commonly used to separate cells and molecules by mass. "A primary use could be in sorting segments of DNA according to their length, which is a key step in genome sequencing efforts. Another use may be in distinguishing one type of virus from another, because many viruses have a unique size, slightly different from other viruses," said the researchers ("Tiny Tango device speeds sorting micro particles"). This is a big step for the scientific world and will enable many further advances.

References:

Nobel Prize Winner Roy J. Glauber

by Naris Ghazarian (Class of '05)

Roy J. Glauber, John L. Hall and Theodor W. Haensch are this year's winners of the Nobel Prize for physics. The prize was awarded to the three for their contributions to optical science: Glauber for his quantum theory of optical coherence, and Hall and Haensch for their work on laser-based precision spectroscopy. Roy J. Glauber, a Harvard University professor, received half of the prize, while Hall and Haensch were each awarded a quarter.

Roy J. Glauber was born in New York City in 1925. He attended the Bronx High School of Science, graduating in 1941. Glauber received both his bachelor's and doctoral degrees from Harvard, completing his education in 1949. Over the years he has been involved in many important studies and projects, including staff work on the Manhattan Project.

Glauber’s contributions to the world of physics have been numerous. He has received several awards, including the A. A. Michelson Medal from the Franklin Institute in Philadelphia, the Max Born Award from the Optical Society of America, and the A. von Humboldt Research Award. The pinnacle of these honors is the Nobel Prize in physics for his contributions to the quantum theory of optical coherence.

The quantum theory of optical coherence describes light in explicitly quantum-mechanical terms rather than as a purely classical wave. Glauber’s work put this description on a firm theoretical footing, and applications of his results have proven successful; experiments in quantum optics continue to test and refine the theory’s predictions.

As a result of his studies, Glauber concluded that “the photon absorption statistics from a laser cannot be described by any simple stochastic behavior, Gaussian or Poissonian, but require a detailed knowledge of the quantum state of the device” (Quantum-mechanical theory of optical coherence). These observations led to studies of the quantum theory of lasers, parametric amplifiers and photon correlation experiments. Glauber’s work is a step toward more precise measurements of atomic structure and frequencies. More accurate measurements will allow for “better GPS systems, better space navigation and improved control of astronomical telescope arrays.” (Quantum-mechanical theory of optical coherence)
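One consequence of the quantum theory of optical coherence is that different kinds of light have measurably different photon-counting statistics. The Python sketch below is only an illustration of that idea, not Glauber's own calculation: it simulates photon counts for laser-like (coherent) light and for chaotic (thermal) light and compares the normalized correlation g2(0), which comes out near 1 for the former and near 2 for the latter. The mean photon number and sample size are arbitrary.

    # Photon-counting statistics: coherent light is Poissonian, single-mode
    # thermal light is Bose-Einstein, and g2(0) tells them apart.
    import numpy as np

    rng = np.random.default_rng(0)
    mean_photons, samples = 5.0, 200_000

    coherent = rng.poisson(mean_photons, samples)   # laser-like light

    p = 1.0 / (1.0 + mean_photons)                  # Bose-Einstein parameter
    thermal = rng.geometric(p, samples) - 1         # shift support to 0, 1, 2, ...

    def g2(n):
        """g2(0) = <n(n-1)> / <n>^2 for photon counts n."""
        return np.mean(n * (n - 1)) / np.mean(n) ** 2

    print(f"coherent light: g2(0) ~ {g2(coherent):.2f}")   # close to 1.0
    print(f"thermal light:  g2(0) ~ {g2(thermal):.2f}")    # close to 2.0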

References:

DARPA Grand Challenge

by Max Alsgaard-Miller (Class of '05)

Last month saw the second running of the Defense Advanced Research Projects Agency (DARPA) Grand Challenge. The Grand Challenge is a race of autonomous ground robots that must complete a challenging 132-mile off-road course within ten hours. The hope is that the private sector will develop new technology the military can use without excessive in-house research and development. The first year, the challenge was a surprising failure: the prize was one million dollars and the farthest distance covered was about 7 miles. This year's competition, with the prize money raised to two million dollars, was much more exciting. Four robots crossed the finish line within the 10-hour time limit, all of them within a roughly 35-minute span. After the times were compiled, it was determined that "Stanley", Stanford's VW Touareg, was the winner.

This year, qualification was a more rigorous process in which robots were placed in five different, very difficult situations, so that only realistic competitors would be included in the actual event. These qualification rounds quickly exposed the teams' shortcomings. Forty robots participated in the qualification round and only twenty-three were selected for the race.

The course was a very challenging 132 miles through parts of California and Nevada. Many elements had to be taken into account when designing a vehicle for the competition. The course was deliberately designed to thwart vehicles that simply wished to use GPS to drive through it: many large obstacles were placed along the route, and sections ran through tunnels or along steep passes where the GPS signal was weak or nonexistent. Therefore, along with GPS, many of the robots, including "Stanley", used laser guidance systems, radar, cameras and active suspension monitoring. The true challenge was getting all these systems to work together to predict the best path for the robot flawlessly. Another lesson the teams learned was that slow and steady wins the race. Many of the robots that did not complete the course used a strategy of pushing the fifty-MPH speed limit. Most of the robots had some sort of error at one point or another during the race, but the slower ones could recover from a slight over- or under-steer, while the faster ones were thrown much farther off course and, in some places, off of passes. This is reflected in the average speed of the four fastest robots: about 18 MPH.
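Using only the figures already in the article (a 132-mile course, a ten-hour limit, and roughly an 18 mph average for the finishers), a quick Python check shows how much margin the "slow and steady" strategy actually left:

    # Back-of-the-envelope time budget from the article's figures.
    course_miles = 132
    time_limit_hours = 10
    finisher_avg_mph = 18      # approximate average speed of the four finishers

    min_avg_mph = course_miles / time_limit_hours        # pace needed just to finish
    finish_time_hours = course_miles / finisher_avg_mph  # implied finishing time

    print(f"Minimum average speed to finish: {min_avg_mph:.1f} mph")      # 13.2 mph
    print(f"Finishing time at 18 mph: about {finish_time_hours:.1f} h")   # ~7.3 hours

An 18 mph average still left well over two hours of slack for recovering from the occasional over- or under-steer.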

The development of a truly autonomous vehicle with reasonable speed over long distances is something the Department of Defense has poured millions of dollars into. The DARPA Grand Challenge was an excellent way to enlist non-federal parties to do that research and development for the government. Now that it has concluded, we will just have to wait and see what they come up with next.

References:

Thursday, November 10, 2005

Clozaril: The Gold Standard

by Adrienne Berkowitz (Class of '05)

Wakefield, MA. Sitting in a crowded room at the outpatient mental health clinic, 15 patients wait their turn for their bimonthly blood tests. A young woman, unfamiliar to the group, enters the room. One of the patients interrupts his conversation and asks the woman if she is on Clozaril. The woman says no and is asked to leave the room. The 15 patients feel that their group is special; only those who are on Clozaril are allowed to attend their meetings. Clozaril has changed their lives, and they share a special bond with the other group members who have had the same positive experience.

Riverside Counseling Center holds Clozaril Clinics every two weeks for people with a diagnosis of schizophrenia. This disorder affects approximately one in every 100 people. According to the National Institute of Mental Health (NIMH), approximately 200 million people will develop schizophrenia in their lifetime. Schizophrenia is a thought, or psychotic, disorder; however, psychosis may be present only during some periods in the course of the disease. Common characteristics of schizophrenia are delusions, hallucinations, disorganized speech, grossly disorganized or catatonic behavior, a limited ability to express emotions and feelings, a lack of interest or energy unaccompanied by depressed affect, and an inability to sustain concentration or attention.

Clozaril, the gold standard among antipsychotic medications for people with schizophrenia, was approved by the Food and Drug Administration (FDA) for general use in 1990. Clozaril is a tricyclic dibenzodiazepine derivative, 8-chloro-11-(4-methyl-1-piperazinyl)-5H-dibenzo[b,e][1,4]diazepine. The active ingredient is clozapine, a yellow, crystalline powder that is very slightly soluble in water. The inactive ingredients are colloidal silicon dioxide, lactose, magnesium stearate, povidone, corn starch, and talc.

Clozaril is considered the gold standard because it is the most successful antipsychotic medication for people with schizophrenia who have not responded well to other medications. Clozaril is a unique medication that treats positive symptoms such as hallucinations, delusions, bizarre behavior, and hostility. It also treats negative symptoms such as blunted emotions, lack of motivation, and an inability to experience pleasure. All other antipsychotic medications are compared to Clozaril, which is how it earned the name gold standard.

Even though it is considered the gold standard, Clozaril is only prescribed after other medications have been unsuccessful. Clozaril can deplete white blood cells and therefore requires a blood test every two weeks. If a patient’s white blood cell count gets too low, the patient has an increased risk of developing infections. Blood tests are necessary to detect the blood disorder at an early stage so that the patient can stop taking the drug immediately if necessary. Other problematic side effects may include drooling and bedwetting while asleep, drowsiness, and weight gain.

Carol Vanderlippe RNCS, LMHC, a nurse at Riverside, starts the Clozaril meeting by asking the patients how they feel. Many reply that they feel groggy and tired, a common side effect of the medication. However, the benefits outweigh the problematic side effects. When Vanderlippe asks whether the patients have noticed a difference on their new medication, the room instantly fills with positive feedback. The patients tell stories about sleeping better, being able to perform everyday tasks, feeling less moody, and being able to focus and enjoy life. Since Clozaril was introduced in 1990, thousands of people have been able to maintain a high level of function. This gold standard drug will be around for a long time because its benefits outweigh its risks.

References:

Wednesday, November 09, 2005

Particle Accelerators

by Justin LeClair (Class of '05)

If you were ever wondering how to accelerate a proton to an extremely high energy and have it collide with another subatomic particle, producing an antiproton, I have just the device for you. Particle accelerators are devices used to accelerate charged particles in a vacuum. Today particle accelerators are some of the largest and most expensive instruments used by physicists. One might question the need to spend millions of dollars accelerating charged particles with huge electromagnetic fields. Aside from the obvious answer (because it's just sooo cool to fire a proton near the speed of light), particle accelerators are used to study the interactions between subatomic particles when they collide. That tells physicists a great deal about the laws governing the subatomic world, and it teaches them about unlocking vast amounts of energy. You have probably heard that if you could convert all of the energy contained in 1 kg of sugar, or 1 kg of water, or 1 kg of anything else, you could drive a car for about 100,000 years without stopping. This is possible because E=mc^2 tells us that mass is essentially very concentrated energy. It is not easy to convert matter into energy, or we would power everything with it. The most direct way is to collide a proton with an antiproton (a particle like the proton but with the opposite charge). But because nearly all the antimatter was consumed shortly after the big bang, antimatter is very difficult to obtain. With a particle accelerator we can instead shoot protons at nearly the speed of light (about 300,000 km/sec) into matter, creating temperatures exceeding 10,000,000,000,000 °C and converting some of the mass directly into energy. Unlocking a method of harnessing this energy could do untold things for the future of our species.
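Here is a rough Python check of the "100,000 years of driving" figure above, assuming an average engine output of about 20 kilowatts; that power figure is my assumption for the sketch, not something from the article.

    # Rough check of the "drive for ~100,000 years" claim.
    mass_kg = 1.0
    c = 3.0e8                        # speed of light, m/s
    energy_j = mass_kg * c ** 2      # E = m c^2  ->  9e16 joules

    avg_car_power_w = 20_000         # assumed continuous power draw (20 kW)
    seconds_per_year = 3.156e7
    years = energy_j / (avg_car_power_w * seconds_per_year)
    print(f"E = {energy_j:.1e} J, roughly {years:,.0f} years of driving")
    # -> on the order of 100,000 years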

Another question you might ask yourself is: how does a particle accelerator work? To accelerate a particle to nearly the speed of light you need three things.
1) A source of particles
2) A tube with a partial vacuum
3) A way to accelerate the particles

You can get protons by stripping the electrons off hydrogen gas (whose atoms consist of one proton and one electron) through ionization. If you have heard anything about particle accelerators, you probably have the notion that they require huge magnets. Magnets are not used to accelerate the particles, but to guide them; to accelerate the particles you use strong electric fields. Imagine a long track with a steel ball on it. You create an electromagnetic field two feet ahead of the ball, and the ball rolls toward it; just before it gets there, you turn off the first field and turn on another one two feet farther down. The ball continues past the first field and toward the second, accelerating the whole time. You could continue this until the ball reaches your desired velocity. To do this fast enough, you toggle the fields sinusoidally so that the positive field is always just behind the proton and the negative field is just in front of it. It takes a great deal of distance to accelerate particles to nearly the speed of light: some particle accelerators are two miles long, and that is only enough to get an electron up to speed. To accelerate a proton you would need far more room. The easiest way to get around this (pun intended) is to have the tube form a circle, so the particles can fly around the ring as many times as needed. This method comes with its own issues: as the protons are guided around the circle, they give off very powerful X-rays that require intensive shielding, and you also need to compensate for the loss of energy (think of it as friction from rubbing along the outer edge), which requires larger capacitors and thus more money.
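To put numbers on the "keep the field just behind the particle" idea described above, here is a minimal Python sketch: a proton crosses a series of accelerating gaps, each adding the energy of one elementary charge falling through an assumed 5-million-volt gap, and its speed is computed relativistically so it approaches, but never reaches, the speed of light. The gap voltage and gap counts are illustrative, not taken from any real machine.

    # Staged acceleration: each gap adds q*V of kinetic energy; speed is
    # computed relativistically, so v/c approaches but never reaches 1.
    import math

    PROTON_REST_MC2_EV = 938.272e6   # proton rest energy in electron volts

    def speed_fraction(kinetic_energy_ev):
        """v/c for a proton with the given kinetic energy."""
        gamma = 1.0 + kinetic_energy_ev / PROTON_REST_MC2_EV
        return math.sqrt(1.0 - 1.0 / gamma ** 2)

    gap_voltage = 5e6                # 5 MV per accelerating gap (illustrative)
    for gaps in (1, 10, 100, 1000):
        ke_ev = gaps * gap_voltage   # one elementary charge gained per gap
        print(f"{gaps:4d} gaps: {ke_ev / 1e9:6.2f} GeV -> v = {speed_fraction(ke_ev):.4f} c")

The diminishing returns in the printout show why higher energies demand ever longer machines or rings the particles can circle many times.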

References:

The Other, Other (Red) Meat

by Robbie Havdala (Class of '05)

Currently, cloning is used in the mass production of crops like soybeans. Americans have, for the most part, been unknowingly eating these genetically cloned products for ages. Pending the FDA’s approval, cloned animals could be coming to the dinner table as well.

Companies like ViaGen and Cyagra dominate the nascent food-cloning industry. After a 2003 FDA release stating that food from cloned animals is safe, farmers are on their toes waiting for FDA approval of commercialization. Though no law prohibits selling such products, ViaGen has voluntarily withheld its products until the FDA gives its go-ahead. Nonetheless, many cloned animals are already living and waiting for the OK.

Genetic cloning has developed rapidly over the past few years. The process involves extracting the nucleus from an adult cell, inserting it into an egg cell, and implanting the resulting embryo in a surrogate mother animal. Currently, the procedure costs about $8,000 per animal, drastically cheaper than the 2003 price of $82,000 per animal. With the price dropping rapidly, the market appears favorable to cloning.

Cloned food opens the door for the meat industry. “Increased genetic merit (of clones) for increased food production, disease resistance, and reproductive efficiency” are all factors that would lead to higher-quality food, according to a 2004 National Research Council statement.

The future looks bright for ViaGen and Cyagra, as preliminary studies found little to no risk in the consumption of genetically cloned foods. Japan has had results similar to those in the United States, leading experts to expect an imminent approval from the FDA.

With the anticipated sanction, opponents of cloning appear weaker and weaker. Adversaries of the process note that cloned animals are still distinguishable from normal ones at birth. Even so, experts claim that the abnormalities are subtle and have no effect on human consumption.

The growing concern now is convincing the public that genetically cloned food is safe. The National Milk Producers Federation (NMPF) has publicly refrained from supporting milk from cloned cows, but it faces the same concerns as the meat industry. ViaGen insists that the process is genetic duplication, not modification, claiming that it is just like making twins: the life and characteristics of the animal are still determined by the environment in which it is raised.

Nonetheless, the public has been put off by the “yuck” factor. According to a March survey by the International Food Information Council, 63 percent of consumers would not buy food from cloned animals even if the FDA approves the process. Many believe that cloning animals for consumption crosses a line. Analysts also fear that foreign markets may reject American products if the industry shifts completely to genetically duplicated animals.

In the end, it all comes down to the consumers of America. As Mark Nelson of the Grocery Manufacturers of America put it, “We support the science. But our members are in the business of selling food to the public. If the public doesn’t want to eat Velveeta made from cloned milk, it ain’t gonna happen.”

References: