Sunscreen | Understanding and definition of Sunscreen | The use of sunscreen

Sunscreen (also commonly known as sunblock, sun lotion, sun cream or block-out) is a lotion, spray, gel or other topical product that absorbs or reflects some of the sun's ultraviolet (UV) radiation from skin exposed to sunlight and thus helps protect against sunburn. Skin-lightening products often include sunscreen to protect lightened skin, because light skin is more susceptible to sun damage than darker skin.

Sunscreens contain one or more of the following ingredients:

  1. Organic chemical compounds that absorb ultraviolet light.
  2. Inorganic particulates that reflect, scatter, and absorb UV light (such as titanium dioxide, zinc oxide, or a combination of both).
  3. Organic particulates that mostly absorb light like organic chemical compounds, but contain multiple chromophores, may reflect and scatter a fraction of light like inorganic particulates, and behave differently in formulations than organic chemical compounds. An example is Tinosorb M. Since the UV-attenuating efficacy depends strongly on particle size, the material is micronised to particle sizes below 200 nm. The mode of action of this photostable filter system is governed to about 90% by absorption and 10% by scattering of UV light.
Depending on their mode of action, sunscreens can be classified as physical sunscreens (those that reflect sunlight) or chemical sunscreens (those that absorb UV light).

Medical organizations such as the American Cancer Society recommend the use of sunscreen because it helps prevent squamous cell carcinoma and basal cell carcinoma. However, the use of sunscreens is controversial for various reasons. Many sunscreens do not block UVA radiation, which does not cause sunburn but can increase the rate of melanoma, another kind of skin cancer, so people using sunscreens may be exposed to high levels of UVA without realizing it.

Sunscreen is often colloquially called suntan lotion because of the similarity of the names, although suntan lotion is formulated to absorb UV rays rather than block them.

The dose used in FDA sunscreen testing is 2 mg/cm² of exposed skin. Assuming an "average" adult build of height 5 ft 4 in (163 cm) and weight 150 lb (68 kg) with a 32 in (82 cm) waist, an adult wearing a bathing suit covering the groin area should apply about 29 g (approximately 1 oz) evenly to the uncovered body. For the face alone, this translates to about 1/4 to 1/3 of a teaspoon. Larger individuals should scale these quantities accordingly.
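The 29 g figure is simply the test application rate multiplied by the exposed skin area. Here is a minimal sketch of that arithmetic; the ~14,500 cm² exposed-area figure is back-derived from the numbers above, not an official FDA value:

```python
# Sketch of the sunscreen dose arithmetic described above.
DOSE_MG_PER_CM2 = 2.0  # FDA test application rate

def sunscreen_grams(exposed_area_cm2: float) -> float:
    """Grams of sunscreen needed at the FDA test rate."""
    return exposed_area_cm2 * DOSE_MG_PER_CM2 / 1000.0

# An "average" adult in a bathing suit exposes roughly 14,500 cm^2 of skin
# (an assumption chosen so the result matches the 29 g figure above):
print(sunscreen_grams(14_500))  # -> 29.0
```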

Contrary to the common advice that sunscreen should be reapplied every 2–3 hours, some research has shown that the best protection is achieved by application 15–30 minutes before exposure, followed by one reapplication 15–30 minutes after the sun exposure begins. Further reapplication is only necessary after activities such as swimming, sweating, or rubbing/wiping.

However, more recent research at the University of California, Riverside, indicates that sunscreen needs to be reapplied within 2 hours in order to remain effective. Not reapplying could even cause more cell damage than not using sunscreen at all, due to the release of extra free radicals from sunscreen chemicals that have been absorbed into the skin. Some studies have shown that people commonly apply only 1/4 to 1/2 of the amount recommended to achieve the rated sun protection factor (SPF); in consequence, the effective SPF should be downgraded to the square root or fourth root of the advertised value.
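To see what that downgrade implies in practice, here is a small sketch under the commonly cited model that effective SPF scales as the rated SPF raised to the fraction of the recommended dose actually applied; the model and the SPF 30 example are illustrative assumptions, not clinical results:

```python
# Effective SPF under the model: effective = rated ** applied_fraction.
# This reproduces the square-root / fourth-root downgrade described above.
def effective_spf(rated_spf: float, applied_fraction: float) -> float:
    return rated_spf ** applied_fraction

for fraction in (1.0, 0.5, 0.25):
    print(f"{fraction:.0%} of dose -> SPF {effective_spf(30, fraction):.1f}")
# 100% -> 30.0, 50% -> 5.5 (square root), 25% -> 2.3 (fourth root)
```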

The first effective sunscreen may have been developed by chemist Franz Greiter in 1938. The product, called Gletscher Crème (Glacier Cream), subsequently became the basis for the company Piz Buin (named after the place where Greiter allegedly received the sunburn that inspired his concoction), which still markets sunscreen products today. It has been estimated that Gletscher Crème had a sun protection factor of 2.

The first widely used sunscreen was produced by Benjamin Green, an airman and later a pharmacist, in 1944. The product, Red Vet Pet (for red veterinary petrolatum), had limited effectiveness, working as a physical blocker of ultraviolet radiation. It was a disagreeable red, sticky substance similar to petroleum jelly. It was developed during the height of World War II, when the hazards of sun overexposure were becoming apparent to soldiers in the Pacific and to their families at home. Sales of the product boomed when Coppertone acquired the patent and marketed the substance under the Coppertone girl and Bain de Soleil branding in the early 1950s.

Franz Greiter is credited with introducing the concept of sun protection factor (SPF) in 1962, which has become a worldwide standard for measuring the effectiveness of sunscreen when applied at an even rate of 2 milligrams per square centimeter (mg/cm2). Some controversy exists over the usefulness of SPF measurements, especially whether the 2 mg/cm2 application rate is an accurate reflection of people’s actual use.
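For reference, SPF is conventionally defined as the ratio of the smallest UV dose that produces redness (the minimal erythemal dose, MED) on sunscreen-protected skin to the smallest dose that does so on unprotected skin, with the product applied at the 2 mg/cm² rate:

SPF = MED(protected skin) / MED(unprotected skin)

So an SPF 15 product, applied at the test rate, lets the wearer absorb fifteen times the UV dose before reddening.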

Newer sunscreens have been developed with the ability to withstand contact with water, heat and sweat.

As a defense against UV radiation, the amount of the brown pigment melanin in the skin increases when exposed to moderate (depending on skin type) levels of radiation; this is commonly known as a sun tan. The purpose of melanin is to absorb UV radiation and dissipate the energy as harmless heat, preventing the UV from damaging skin tissue. UVA gives a quick tan that lasts for days by oxidizing melanin that is already present and by triggering the release of melanin from melanocytes. UVB, on the other hand, yields a tan that takes roughly two days to develop because it stimulates the body to produce more melanin. The photochemical properties of melanin make it an excellent photoprotectant.

Sunscreen chemicals, on the other hand, cannot dissipate the energy of the excited state as efficiently as melanin; therefore, the penetration of sunscreen ingredients into the lower layers of the skin increases the amount of free radicals and reactive oxygen species (ROS).

Some sunscreen lotions now include compounds such as titanium dioxide, which helps protect against UVB rays. Other UVA-blocking compounds found in sunscreen include zinc oxide and avobenzone. There are also naturally occurring compounds in rainforest plants, such as the fern Phlebodium aureum, that are known to protect the skin from UV radiation damage.

Some sunscreen chemicals produce potentially harmful substances if they are illuminated while in contact with living cells. The amount of sunscreen which penetrates through the stratum corneum may or may not be large enough to cause damage. In one study of sunscreens, the authors write:

The question whether UV filters act on or in the skin has so far not been fully answered. Despite the fact that an answer would be a key to improve formulations of sun protection products, many publications carefully avoid addressing this question.

George Zachariadis and E. Sahanidou of the Laboratory of Analytical Chemistry at Aristotle University in Thessaloniki, Greece, have now carried out an ICP-AES analysis of several commercially available sunscreen creams and lotions. "The objective was the simultaneous determination of titanium and several minor, trace or toxic elements (aluminum, zinc, magnesium, iron, manganese, copper, chromium, lead, and bismuth) in the final products," the researchers say. They concluded that "Most of the commercial preparations that were studied showed generally good agreement to the ingredients listed on the product label." However, they also point out that the quantitative composition of the products cannot be assessed, because product labels usually do not provide a detailed breakdown of all ingredients and their concentrations. More worryingly, their tests consistently revealed the presence of elements not cited in the product formulation, emphasizing the need for a standardized and official testing method for multi-element quality control of these products.

Some epidemiological studies indicate an increased risk of malignant melanoma among sunscreen users. Despite these studies, no medical association has published a recommendation against using sunscreen. Several meta-analyses have concluded that the evidence is not yet sufficient to claim a positive correlation between sunscreen use and malignant melanoma.

Adverse health effects may be associated with some synthetic compounds in sunscreens. In 2007, two studies by the CDC highlighted concerns about the sunscreen chemical oxybenzone (benzophenone-3). The first detected the chemical in more than 95% of the 2,000 Americans tested, while the second found that mothers with high levels of oxybenzone in their bodies were more likely to give birth to underweight baby girls.

The use of sunscreen also interferes with vitamin D production, leading to deficiency in Australia after a government campaign to increase sunscreen use. Doctors recommend spending small amounts of time in the sun without sun protection to ensure adequate production of vitamin D. When the UV index is greater than 3 (which occurs daily within the tropics and daily during the spring and summer seasons in temperate regions) adequate amounts of vitamin D3 can be made in the skin after only ten to fifteen minutes of sun exposure at least two times per week to the face, arms, hands, or back without sunscreen. With longer exposure to UVB rays, an equilibrium is achieved in the skin, and the vitamin simply degrades as fast as it is generated.

"There is evidence from isolated cell experiments that zinc oxide and titanium dioxide can induce free radical formation in the presence of light and that this may damage these cells (photo-mutagenicity with zinc oxide). However, this would only be of concern in people using sunscreens if the zinc oxide and titanium dioxide penetrated into viable skin cells. The weight of current evidence is that they remain on the surface of the skin and in the outer dead layer (stratum corneum) of the skin."

The principal ingredients in sunscreens are usually aromatic molecules conjugated with carbonyl groups. This general structure allows the molecule to absorb high-energy ultraviolet rays and release the energy as lower-energy rays, thereby preventing the skin-damaging ultraviolet rays from reaching the skin. Upon exposure to UV light, most of the ingredients (with the notable exception of avobenzone) do not undergo significant chemical change, allowing them to retain their UV-absorbing potency without significant photodegradation. A chemical stabilizer is included in some sunscreens containing avobenzone to slow its breakdown; examples include formulations containing Helioplex and AvoTriplex. The stability of avobenzone can also be improved by bemotrizinol, octocrylene and various other photostabilisers.

Excessive exposure to direct sunlight is potentially harmful and can result in sunburn if a person does not wear sun-protective clothing or use suitable sunscreen. Products with a higher SPF (Sun Protection Factor) provide greater protection against ultraviolet radiation. However, in 1998 a report to the Annual Meeting of the American Association for the Advancement of Science found that some sunscreens advertising UVA and UVB protection do not provide adequate protection from UVA radiation and could give sun tanners a false sense of security. A sunscreen should also be hypoallergenic and noncomedogenic so that it does not cause a rash or clog the pores, which can cause acne.

For those who choose to tan, some dermatologists recommend the following preventative measures:
  1. Sunscreens should block both UVA and UVB rays; these are called broad-spectrum sunscreens. As noted above, they should also be hypoallergenic and noncomedogenic.
  2. Sunscreens need to be applied thickly enough to get the full SPF protection.
  3. Sunscreens should be applied 15 to 30 minutes before exposure, followed by one reapplication 15 to 30 minutes after the sun exposure begins. Further reapplication is only necessary after activities such as swimming, sweating, and rubbing.
  4. Sun rays are strongest between 10 am and 4 pm. Sun rays are stronger at higher elevations (mountains) and lower latitudes (near the equator).
  5. Wearing a hat with a brim and anti-UV sunglasses can provide almost 100% protection against ultraviolet radiation entering the eyes.
  6. Reflective surfaces like snow and water can greatly increase the amount of UV radiation to which the skin is exposed.
Recent evidence indicates that caffeine and caffeine sodium benzoate increase UVB-induced apoptosis in both topical and oral applications. In mice, UVB-induced hyperplasia was greatly reduced by administration of these substances. Although these effects have not yet been tested in humans, caffeine and caffeine sodium benzoate may prove to be novel inhibitors of skin cancer.

Meteorite | Understanding and definition of Meteorite | Collection of space rock | Fragments of Meteorite | The hunters Meteorite

A meteorite is a natural object originating in outer space that survives impact with the Earth's surface. Meteorites range widely in size. Most meteorites derive from small astronomical objects called meteoroids, but they are also sometimes produced by impacts of asteroids. When a body enters the atmosphere, ram pressure causes it to heat up and emit light, forming a fireball, also known as a meteor or shooting/falling star. The term bolide refers either to an extraterrestrial body that collides with the Earth, or to an exceptionally bright, fireball-like meteor, regardless of whether it ultimately impacts the surface.

More generally, a meteorite on the surface of any celestial body is a natural object that has come from elsewhere in space. Meteorites have been found on the Moon and Mars.

Meteorites that are recovered after being observed transiting the atmosphere or impacting the Earth are called falls. All other meteorites are known as finds. As of February 2010, approximately 1,086 witnessed falls are represented by specimens in the world's collections. In contrast, there are over 38,660 well-documented meteorite finds.

Meteorites have traditionally been divided into three broad categories: stony meteorites are rocks, mainly composed of silicate minerals; iron meteorites are largely composed of metallic iron-nickel; and stony-iron meteorites contain large amounts of both metallic and rocky material. Modern classification schemes divide meteorites into groups according to their structure, chemical and isotopic composition, and mineralogy.

Meteorites are always named for the place where they were found, usually a nearby town or geographic feature. In cases where many meteorites were found in one place, the name may be followed by a number or letter (e.g., Allan Hills 84001 or Dimmitt (b)). Some meteorites have informal nicknames: the Sylacauga meteorite is sometimes called the "Hodges meteorite" after Ann Hodges, the woman who was struck by it; the Canyon Diablo meteorite, which formed Meteor Crater, has dozens of such aliases. However, the single official name designated by the Meteoritical Society is used by scientists, catalogers, and most collectors.

Most meteoroids disintegrate when entering Earth's atmosphere. However, an estimated 500 meteorites ranging in size from marbles to basketballs or larger do reach the surface each year; only 5 or 6 of these are typically recovered and made known to scientists. Few meteorites are large enough to create large impact craters. Instead, they typically arrive at the surface at their terminal velocity and, at most, create a small pit. Even so, falling meteorites have reportedly caused damage to property, and injuries to livestock and people.
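For a sense of scale, terminal velocity follows from balancing gravity against aerodynamic drag, v = sqrt(2mg / (ρ · Cd · A)). A rough sketch with illustrative values; the air density, drag coefficient, and stone size here are all assumptions:

```python
import math

def terminal_velocity(mass_kg: float, radius_m: float,
                      rho_air: float = 1.2, cd: float = 0.5) -> float:
    """Terminal speed from the drag balance m*g = 0.5*rho*cd*A*v^2."""
    g = 9.81                        # gravitational acceleration, m/s^2
    area = math.pi * radius_m ** 2  # cross-sectional area, m^2
    return math.sqrt(2 * mass_kg * g / (rho_air * cd * area))

# A fist-sized ~1 kg stone (radius ~4 cm) arrives at roughly 80 m/s,
# fast enough to dent a car roof but far below crater-forming speeds:
print(f"{terminal_velocity(1.0, 0.04):.0f} m/s")
```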

Large meteoroids may strike the ground with a significant fraction of their cosmic velocity, leaving behind a hypervelocity impact crater. The kind of crater will depend on the size, composition, degree of fragmentation, and incoming angle of the impactor. The force of such collisions has the potential to cause widespread destruction. The most frequent hypervelocity cratering events on the Earth are caused by iron meteoroids, which are most easily able to transit the atmosphere intact. Examples of craters caused by iron meteoroids include Barringer Meteor Crater, Odessa Meteor Crater, Wabar craters, and Wolfe Creek crater; iron meteorites are found in association with all of these craters. In contrast, even relatively large stony or icy bodies like small comets or asteroids, up to millions of tons, are disrupted in the atmosphere, and do not make impact craters. Although such disruption events are uncommon, they can cause a considerable concussion to occur; the famed Tunguska event probably resulted from such an incident. Very large stony objects, hundreds of meters in diameter or more, weighing tens of millions of tons or more, can reach the surface and cause large craters, but are very rare. Such events are generally so energetic that the impactor is completely destroyed, leaving no meteorites. (The very first example of a stony meteorite found in association with a large impact crater, the Morokweng crater in South Africa, was reported in May 2006.)

Several phenomena are well documented during witnessed meteorite falls too small to produce hypervelocity craters. The fireball that occurs as the meteoroid passes through the atmosphere can appear to be very bright, rivaling the sun in intensity, although most are far dimmer and may not even be noticed during daytime. Various colors have been reported, including yellow, green and red. Flashes and bursts of light can occur as the object breaks up. Explosions, detonations, and rumblings are often heard during meteorite falls, which can be caused by sonic booms as well as shock waves resulting from major fragmentation events. These sounds can be heard over wide areas, up to many thousands of square km. Whistling and hissing sounds are also sometimes heard, but are poorly understood. Following passage of the fireball, it is not unusual for a dust trail to linger in the atmosphere for some time.

As meteoroids are heated during atmospheric entry, their surfaces melt and experience ablation. They can be sculpted into various shapes during this process, sometimes resulting in deep "thumb-print" like indentations on their surfaces called regmaglypts. If the meteoroid maintains a fixed orientation for some time, without tumbling, it may develop a conical "nose cone" or "heat shield" shape. As it decelerates, eventually the molten surface layer solidifies into a thin fusion crust, which on most meteorites is black (on some achondrites, the fusion crust may be very light colored). On stony meteorites, the heat-affected zone is at most a few mm deep; in iron meteorites, which are more thermally conductive, the structure of the metal may be affected by heat up to 1 cm below the surface. Meteorites are sometimes reported to be warm to the touch when they land, but they are never hot. Reports, however, vary greatly, with some meteorites being reported as "burning hot to the touch" upon landing, and others forming a frost upon their surface.

Meteoroids that experience disruption in the atmosphere may fall as meteorite showers, which can range from only a few up to thousands of separate individuals. The area over which a meteorite shower falls is known as its strewn field. Strewn fields are commonly elliptical in shape, with the major axis parallel to the direction of flight. In most cases, the largest meteorites in a shower are found farthest down-range in the strewn field.

Most meteorites are stony meteorites, classed as chondrites and achondrites. Only about 6% of meteorites are iron meteorites or a blend of rock and metal, the stony-iron meteorites. Modern classification of meteorites is complex; the review paper of Krot et al. (2007) summarizes modern meteorite taxonomy.

About 86% of the meteorites that fall on Earth are chondrites, which are named for the small, round particles they contain. These particles, or chondrules, are composed mostly of silicate minerals that appear to have been melted while they were free-floating objects in space. Certain types of chondrites also contain small amounts of organic matter, including amino acids, and presolar grains. Chondrites are typically about 4.55 billion years old and are thought to represent material from the asteroid belt that never formed into large bodies. Like comets, chondritic asteroids are some of the oldest and most primitive materials in the solar system. Chondrites are often considered to be "the building blocks of the planets".

About 8% of the meteorites that fall on Earth are achondrites (meaning they do not contain chondrules), some of which are similar to terrestrial mafic igneous rocks. Most achondrites are also ancient rocks, and are thought to represent crustal material of asteroids. One large family of achondrites (the HED meteorites) may have originated on the asteroid 4 Vesta. Others derive from different asteroids. Two small groups of achondrites are special, as they are younger and do not appear to come from the asteroid belt. One of these groups comes from the Moon, and includes rocks similar to those brought back to Earth by the Apollo and Luna programs. The other group is almost certainly from Mars; its members are the only materials from other planets ever recovered.

About 5% of meteorites that fall are iron meteorites with intergrowths of iron-nickel alloys, such as kamacite and taenite. Most iron meteorites are thought to come from the core of a number of asteroids that were once molten. As on Earth, the denser metal separated from silicate material and sank toward the center of the asteroid, forming a core. After the asteroid solidified, it broke up in a collision with another asteroid. Due to the low abundance of irons in collection areas such as Antarctica, where most of the meteoric material that has fallen can be recovered, it is possible that the actual percentage of iron-meteorite falls is lower than 5%.

Stony-iron meteorites constitute the remaining 1%. They are a mixture of iron-nickel metal and silicate minerals. One type, called pallasites, is thought to have originated in the boundary zone above the core regions where iron meteorites originated. The other major type of stony-iron meteorites is the mesosiderites.

Tektites (from Greek tektos, molten) are not themselves meteorites, but rather natural glass objects, up to a few centimeters in size, which were formed, according to most scientists, by the impacts of large meteorites on Earth's surface. A few researchers have favored a lunar origin for tektites, as volcanic ejecta, but this theory has lost much of its support over the last few decades.

Most meteorite falls are recovered on the basis of eye-witness accounts of the fireball or the actual impact of the object on the ground, or both. Therefore, despite the fact that meteorites actually fall with virtually equal probability everywhere on Earth, verified meteorite falls tend to be concentrated in areas with high human population densities such as Europe, Japan, and northern India.

A small number of meteorite falls have been observed with automated cameras and recovered following calculation of the impact point. The first of these was the Příbram meteorite, which fell in Czechoslovakia (now the Czech Republic) in 1959. In this case, two cameras used to photograph meteors captured images of the fireball. The images were used both to determine the location of the stones on the ground and, more significantly, to calculate for the first time an accurate orbit for a recovered meteorite.

Following the Příbram fall, other nations established automated observing programs aimed at studying infalling meteorites. One of these was the Prairie Network, operated by the Smithsonian Astrophysical Observatory from 1963 to 1975 in the midwestern US. This program also observed a meteorite fall, the Lost City chondrite, allowing its recovery and a calculation of its orbit. Another program in Canada, the Meteorite Observation and Recovery Project, ran from 1971 to 1985. It too recovered a single meteorite, Innisfree, in 1977. Finally, observations by the European Fireball Network, a descendant of the original Czech program that recovered Příbram, led to the discovery and orbit calculation for the Neuschwanstein meteorite in 2002.

A few meteorites were found in Antarctica between 1912 and 1964. In 1969, the 10th Japanese Antarctic Research Expedition found nine meteorites on a blue ice field near the Yamato Mountains. With this discovery came the realization that the movement of ice sheets might act to concentrate meteorites in certain areas. After a dozen other specimens were found in the same place in 1973, a Japanese expedition dedicated to the search for meteorites was launched in 1974. This team recovered nearly 700 meteorites.

Shortly thereafter, the United States began its own program to search for Antarctic meteorites, operating along the Transantarctic Mountains on the other side of the continent: the ANtarctic Search for METeorites (ANSMET) program. European teams, starting with a consortium called "EUROMET" in the late 1980s, and continuing with a program by the Italian Programma Nazionale di Ricerche in Antartide have also conducted systematic searches for Antarctic meteorites.

The Antarctic Scientific Exploration of China has conducted successful meteorite searches since 2000. A Korean program (KOREAMET) was launched in 2007 and has collected a few meteorites. The combined efforts of all of these expeditions have produced more than 23,000 classified meteorite specimens since 1974, with thousands more that have not yet been classified. For more information see the article by Harvey (2003).

At about the same time as meteorite concentrations were being discovered in the cold desert of Antarctica, collectors discovered that many meteorites could also be found in the hot deserts of Australia. Several dozen meteorites had already been found in the Nullarbor region of Western and South Australia. Systematic searches between about 1971 and the present have recovered over 500 more, roughly 300 of which are currently well characterized. Meteorites can be found in this region because the land is a flat, featureless plain covered by limestone. In the extremely arid climate, there has been relatively little weathering or sedimentation on the surface for tens of thousands of years, allowing meteorites to accumulate without being buried or destroyed. The dark-colored meteorites can then be recognized among the very different-looking limestone pebbles and rocks.

In 1999, meteorite hunters discovered that the deserts of southern and central Oman were also favorable for the collection of many specimens. The gravel plains in the Dhofar and Al Wusta regions of Oman, south of the sandy deserts of the Rub' al Khali, had yielded about 5,000 meteorites as of mid-2009. Included among these are a large number of lunar and Martian meteorites, making Oman a particularly important area for both scientists and collectors. Early expeditions to Oman were mainly conducted by commercial meteorite dealers; however, international teams of Omani and European scientists have since also collected specimens.

The recovery of meteorites from Oman is now prohibited by national law, but a number of international hunters continue to remove specimens deemed "national treasures." This law provoked a small international incident, as its implementation preceded any public notification of it, resulting in the prolonged imprisonment of a large group of meteorite hunters, primarily from Russia but also including members from the U.S. and several other European countries.

Beginning in the mid-1990s, amateur meteorite hunters began scouring the arid areas of the southwestern United States. To date, possibly thousands of meteorites have been recovered from the Mojave, Sonoran, Great Basin, and Chihuahuan Deserts, with many found on dry lake beds. Significant finds include the Superior Valley 014 acapulcoite, one of only two of its type found within the United States, as well as the Blue Eagle meteorite, the first Rumuruti-type chondrite found in the Americas. Perhaps the most notable find in recent years is the Los Angeles meteorite, a Martian meteorite reportedly found by Robert Verish. A number of finds from the American Southwest have yet to be formally submitted to the Meteorite Nomenclature Committee, as many finders think it unwise to publicly state the coordinates of their discoveries, for fear of confiscation by the federal government and of "poaching" by other hunters at known find sites. Several of the recently found meteorites are currently on display in the Griffith Observatory in Los Angeles.

In the 1970s, a stone meteorite was uncovered during an archaeological dig at the Danebury Iron Age hillfort in Danebury, England. It was found deposited partway down an Iron Age pit. Since it must have been deliberately placed there, this could indicate one of the first known human finds of a meteorite in Europe.

Some Native Americans treated meteorites as ceremonial objects. In 1915, a 135-pound iron meteorite was found in a Sinagua (c. 1100-1200 AD) burial cist near Camp Verde, Arizona, respectfully wrapped in a feather cloth. A small pallasite was found in a pottery jar in an old burial at Pojoaque Pueblo, New Mexico. Nininger reports several other such instances in the Southwest US and elsewhere, such as the discovery of Native American beads of meteoric iron in Hopewell burial mounds, and the discovery of the Winona meteorite in a Native American stone-walled crypt.

Indigenous peoples often prized iron-nickel meteorites as an easy, if limited, source of iron metal. For example, the Inuit used chips of the Cape York meteorite to form cutting edges for tools and spear tips.

The German physicist Ernst Florens Chladni was the first to publish the then-audacious idea that meteorites were actually rocks from space. He published his booklet, "On the Origin of the Pallas Iron and Others Similar to it, and on Some Associated Natural Phenomena", in 1794. In it he compiled all available data on several meteorite finds and falls and concluded that they must have their origins in outer space. The scientific community of the time responded with resistance and mockery. It took nearly 10 years before the origin of meteorites gained general acceptance, through the work of the French scientist Jean-Baptiste Biot and the British chemist Edward Howard. Biot's study, initiated by the French Academy of Sciences, was prompted by a fall of thousands of meteorites on April 26, 1803 at L'Aigle, France.

One of the leading theories for the cause of the Cretaceous–Tertiary extinction event, which included the dinosaurs, is a large meteorite impact, and the Chicxulub Crater has been identified as the site of this impact. There has been a lively scientific debate as to whether other major extinctions, including the ones at the end of the Permian and Triassic periods, might also have resulted from large impact events, but the evidence is much less compelling than for the end-Cretaceous extinction.

There are several reported instances of falling meteorites having killed both people and livestock, though some of these reports are more credible than others. The most infamous reported fatality from a meteorite impact is that of an Egyptian dog killed in 1911, although this report is highly disputed. That particular fall was identified in the 1980s as Martian in origin. There is, however, substantial evidence that the meteorite known as Valera hit and killed a cow upon impact, nearly dividing the animal in two, and similar unsubstantiated reports exist of a horse being struck and killed by a stone of the New Concord fall. Throughout history, many first- and second-hand reports of meteorites falling on and killing both humans and other animals abound, but none have been well documented.

The first known modern case of a human hit by a space rock occurred on 30 November 1954 in Sylacauga, Alabama. There a 4 kg stone chondrite crashed through a roof and hit Ann Hodges in her living room after it bounced off her radio. She was badly bruised. The Hodges meteorite, or Sylacauga meteorite, is currently on exhibit at the Alabama Museum of Natural History.

Other than the Sylacauga event, the most plausible of these claims was put forth by a young boy who stated that he had been hit by a small (~3 gram) stone of the Mbale meteorite fall in Uganda, and who stood to gain nothing from the assertion. The stone reportedly fell through a number of banana leaves before striking the boy on the head, causing little to no pain, as it was small enough to have been slowed by friction with both the atmosphere and the leaves. Although it is impossible to prove the claim either way, the boy had little reason to invent such an event.

Several persons have since claimed to have been struck by "meteorites" but no verifiable meteorites have resulted.

Meteorite falls have also inspired religious worship. The cult in the Temple of Artemis (Diana) at Ephesus, one of the Seven Wonders of the Ancient World, possibly originated with the observation of a meteorite fall, which contemporaries understood to have fallen to earth from Zeus, the principal Greek deity.

Gallium | Understanding and definition of Gallium | The function and usefulness of Gallium

Gallium is a chemical element with the symbol Ga and atomic number 31. Elemental gallium does not occur in nature, but it is found as the gallium(III) salt in trace amounts in bauxite and zinc ores. A soft, silvery poor metal, elemental gallium is a brittle solid at low temperatures. Because it liquefies slightly above room temperature, it will melt in the hand. Its melting point is used as a temperature reference point, and from its discovery in 1875 to the semiconductor era, its primary uses were in high-temperature thermometric applications and in the preparation of metal alloys with unusual properties of stability or ease of melting, some being liquid at room temperature or below. The alloy Galinstan (68.5% Ga, 21.5% In, 10% Sn) has a melting point of about −19 °C (−2 °F).

In semiconductors, the major-use compound is gallium arsenide used in microwave circuitry and infrared applications. Gallium nitride and indium gallium nitride, minority semiconductor uses, produce blue and violet light-emitting diodes (LEDs) and diode lasers. Semiconductor use is now almost the entire (> 95%) world market for gallium, but new uses in alloys and fuel cells continue to be discovered.

Gallium is not known to be essential in biology, but because of the biological handling of gallium's primary ionic salt gallium(III) as though it were iron(III), the gallium ion localizes to and interacts with many processes in the body in which iron(III) is manipulated. As these processes include inflammation, which is a marker for many disease states, several gallium salts are used, or are in development, as both pharmaceuticals and radiopharmaceuticals in medicine.

Elemental gallium is not found in nature, but it is easily obtained by smelting. Very pure gallium metal has a brilliant silvery color, and its solid metal fractures conchoidally, like glass. Gallium expands by 3.1% when it solidifies; therefore, storage in glass or metal containers is avoided because the container may rupture when the gallium freezes. Gallium is one of the few materials, along with water, silicon, germanium, bismuth, and antimony, whose liquid state is denser than its solid state.

Gallium attacks most other metals by diffusing into their metal lattice. Gallium for example diffuses into the grain boundaries of Al/Zn alloys or steel, making them very brittle. Also, gallium metal easily alloys with many metals, and was used in small quantities as a plutonium-gallium alloy in the plutonium cores of the first and third nuclear bombs, to help stabilize the plutonium crystal structure.

Gallium's melting point of 302.9146 K (29.7646 °C, 85.5763 °F) is near room temperature, and is one of the formal temperature reference points in the International Temperature Scale of 1990 (ITS-90) established by the BIPM. The triple point of gallium, 302.9166 K (29.7666 °C, 85.5799 °F), is used by NIST in preference to the melting point.
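A trivial consistency check of the figures quoted above, converting the ITS-90 melting point between scales:

```python
def k_to_c(kelvin: float) -> float:
    return kelvin - 273.15

def c_to_f(celsius: float) -> float:
    return celsius * 9 / 5 + 32

mp_k = 302.9146                         # gallium melting point (ITS-90), K
print(round(k_to_c(mp_k), 4))           # 29.7646 (deg C)
print(round(c_to_f(k_to_c(mp_k)), 4))   # 85.5763 (deg F)
```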

Gallium is a metal that will melt in one's hand. This metal has a strong tendency to supercool below its melting point/freezing point. Seeding with a crystal helps to initiate freezing. Gallium is one of the metals (with caesium, rubidium, francium and mercury) which are liquid at or near normal room temperature, and can therefore be used in metal-in-glass high-temperature thermometers. It is also notable for having one of the largest liquid ranges for a metal, and (unlike mercury) for having a low vapor pressure at high temperatures. Unlike mercury, liquid gallium metal wets glass and skin, making it mechanically more difficult to handle (even though it is substantially less toxic and requires far fewer precautions). For this reason as well as the metal contamination problem and freezing-expansion problems noted above, samples of gallium metal are usually supplied in polyethylene packets within other containers.

Gallium does not crystallize in any of the simple crystal structures. The stable phase under normal conditions is orthorhombic, with 8 atoms in the conventional unit cell. Each atom has only one nearest neighbor (at a distance of 244 pm) and six other neighbors within a further 39 pm. Many stable and metastable phases are found as a function of temperature and pressure.

The bonding between the nearest neighbors is found to be of covalent character, so Ga2 dimers are seen as the fundamental building blocks of the crystal. This explains the low melting point relative to gallium's neighbor elements, aluminium and indium. The compound with arsenic, gallium arsenide, is a semiconductor commonly used in light-emitting diodes.

High-purity gallium is dissolved slowly by mineral acids.

Gallium has no known biological role, although it has been observed to stimulate metabolism.

Gallium does not exist in free form in nature, and the few high-gallium minerals, such as gallite (CuGaS2), are too rare to serve as a primary source of the element or its compounds. Its abundance in the Earth's crust is approximately 16.9 ppm. Gallium is found and extracted as a trace component in bauxite and, to a small extent, in sphalerite. The amounts extracted from coal, diaspore, and germanite, in which gallium is also present, are negligible. The United States Geological Survey (USGS) estimates gallium reserves to exceed 1 million tonnes, based on a 50 ppm by weight concentration in known reserves of bauxite and zinc ores. Some flue dusts from burning coal have been shown to contain small quantities of gallium, typically less than 1% by weight.
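For scale, inverting the quoted figures shows the tonnage of host ore the USGS estimate implies; a rough sketch using only the numbers above:

```python
# Reserve estimate back-of-envelope: reserves = concentration * ore mass.
GA_CONCENTRATION = 50e-6    # 50 ppm by weight
GA_RESERVES_T = 1_000_000   # > 1 million tonnes of contained gallium

ore_tonnes = GA_RESERVES_T / GA_CONCENTRATION
print(f"{ore_tonnes:.0e} tonnes of host ore")  # 2e+10 tonnes of bauxite/zinc ore
```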

The only two economic sources of gallium are as a byproduct of aluminium and zinc production, with sphalerite from zinc production being the minor source. Most gallium is extracted from the crude aluminium hydroxide solution of the Bayer process for producing alumina and aluminium. Mercury-cell electrolysis and hydrolysis of the amalgam with sodium hydroxide lead to sodium gallate; electrolysis then gives gallium metal. For semiconductor use, further purification is carried out by zone melting or by single-crystal extraction from a melt (the Czochralski process). Purities of 99.9999% are routinely achieved and widely available commercially. An exact figure for worldwide production is not available, but it is estimated that in 2007 the production of gallium was 184 tonnes, with less than 100 tonnes from mining and the rest from scrap recycling.

The semiconductor applications are the main reason for the low-cost commercial availability of the extremely high-purity (99.9999+%) metal.

Gallium arsenide (GaAs) and gallium nitride (GaN) used in electronic components represented about 98% of the gallium consumption in the United States in 2007. About 66% of semiconductor gallium is used in the U.S. in integrated circuits (mostly gallium arsenide), such as in the manufacture of ultra-high-speed logic chips and MESFETs for low-noise microwave preamplifiers in cell phones. About 20% is used in optoelectronics. Worldwide, gallium arsenide makes up 95% of annual global gallium consumption.

Gallium arsenide is used in optoelectronics in a variety of infrared applications. Aluminium gallium arsenide (AlGaAs) is used in high-powered infrared laser diodes. As a component of the semiconductors indium gallium nitride and gallium nitride, gallium is used to produce blue and violet optoelectronic devices, mostly laser diodes and light-emitting diodes. For example, gallium nitride 405 nm diode lasers are used as a violet light source for higher-density compact disc data storage, in the Blu-ray Disc standard.

Gallium is used as a dopant for the production of solid-state devices such as transistors. However, worldwide the actual quantity used for this purpose is minute, since dopant levels are usually of the order of a few parts per million.

Multijunction photovoltaic cells, developed for satellite power applications, are made by molecular beam epitaxy or metalorganic vapour phase epitaxy of thin films of gallium arsenide, indium gallium phosphide or indium gallium arsenide. The Mars Exploration Rovers and several satellites use triple-junction gallium arsenide on germanium cells. Gallium is the rarest component of new photovoltaic compounds (such as copper indium gallium selenium sulfide, Cu(In,Ga)(Se,S)2) for use in solar panels as a more efficient alternative to crystalline silicon.
  1. Magnesium gallate containing impurities (such as Mn2+) is beginning to be used in ultraviolet-activated phosphor powder.
  2. Neutrino detection. Possibly the largest amount of pure gallium ever collected in a single spot is the Gallium-Germanium Neutrino Telescope used by the SAGE experiment at the Baksan Neutrino Observatory in Russia, which contains 55–57 tonnes of liquid gallium. Another experiment was the GALLEX neutrino detector, operated in the early 1990s in an Italian mountain tunnel, which contained 12.2 tons of gallium-71 in a water-based solution. Solar neutrinos caused a few atoms of Ga-71 to become radioactive Ge-71, which were detected. The deduced solar neutrino flux showed a deficit of 40% relative to theory, which was not explained until better solar neutrino detectors and theories were constructed (see SNO).
  3. As a liquid metal ion source for a focused ion beam.
  4. As alloying element in the magnetic shape memory alloy Ni-Mn-Ga.
  5. In a classic prank, scientists fashion gallium spoons and use them to serve tea to unsuspecting guests; the spoons melt in the hot tea.
  6. As an additive in glide wax for skis and other low-friction surface materials. US 5069803, Sugimura, Kentaro; Shoji Hasimoto & Takayuki Ono, "Use of a synthetic resin composition containing gallium particles in the glide surfacing material of skis and other applications", issued 1995
While gallium is not considered toxic, the data are inconclusive. Some sources suggest that it may cause dermatitis from prolonged exposure; other tests have found no positive reaction. Like most metals, finely divided gallium loses its luster, and powdered gallium appears gray. Thus, when gallium is handled with bare hands, the extremely fine dispersion of liquid gallium droplets that results from wetting the skin with the metal may appear as a gray stain.

Spiral galaxy | Understanding and definition of Spiral galaxy

A spiral galaxy is a certain kind of galaxy originally described by Edwin Hubble in his 1936 work The Realm of the Nebulae and, as such, forms part of the Hubble sequence. Spiral galaxies consist of a flat, rotating disk containing stars, gas and dust, and a central concentration of stars known as the bulge. These are surrounded by a much fainter halo of stars, many of which reside in globular clusters.

Spiral galaxies are named for the spiral structures that extend from the center into the disk. The spiral arms are sites of ongoing star formation and are brighter than the surrounding disk because of the young, hot OB stars that inhabit them.

Roughly two-thirds of all spirals are observed to have an additional component in the form of a bar-like structure, extending from the central bulge, at the ends of which the spiral arms begin. Our own Milky Way has recently (in the 1990s) been confirmed to be a barred spiral, although the bar itself is difficult to observe from our position within the Galactic disk. The most convincing evidence for its existence comes from a recent survey, performed by the Spitzer Space Telescope, of stars in the Galactic center.

Together with irregular galaxies, spiral galaxies make up approximately 60% of galaxies in the local Universe. They are mostly found in low-density regions and are rare in the centers of galaxy clusters.

Spiral galaxies consist of four distinct components:
  • A flat, rotating disc of (mostly newly created) stars and interstellar matter
  • A central stellar bulge of mainly older stars, which resembles an elliptical galaxy
  • A near-spherical halo of stars, including many in globular clusters
  • A supermassive black hole at the very center of the central bulge
The relative importance, in terms of mass, brightness and size, of the different components varies from galaxy to galaxy.

Spiral arms are regions of stars that extend from the center of spiral and barred spiral galaxies. These long, thin regions resemble a spiral and thus give spiral galaxies their name. Naturally, different classifications of spiral galaxies have distinct arm-structures. Sc and SBc galaxies, for instance, have very "loose" arms, whereas Sa and SBa galaxies have tightly wrapped arms (with reference to the Hubble sequence). Either way, spiral arms contain a great many young, blue stars (due to the high mass density and the high rate of star formation), which make the arms so remarkable.

Using the Hubble classification, the bulge of Sa galaxies is usually composed of Population II stars: old, red stars with low metal content. Further, the bulges of Sa and SBa galaxies tend to be large. In contrast, the bulges of Sc and SBc galaxies are much smaller and are composed of young, blue Population I stars. Some bulges have properties similar to those of elliptical galaxies (scaled down to lower mass and luminosity), while others simply appear as higher-density centers of disks, with properties similar to disk galaxies.

Many bulges are thought to host a supermassive black hole at their center. Such black holes have never been directly observed, but much indirect evidence exists. In our own galaxy, for instance, the object called Sagittarius A* is believed to be a supermassive black hole. There is a tight correlation between the mass of the black hole and the velocity dispersion of the stars in the bulge, known as the M-sigma relation.

The bulk of the stars in a spiral galaxy are located either close to a single plane (the Galactic plane) in more or less conventional circular orbits around the center of the galaxy (the galactic centre), or in a spheroidal galactic bulge around the galactic core.

However, some stars inhabit a spheroidal halo or galactic spheroid. The orbital behaviour of these stars is disputed, but they may describe retrograde and/or highly inclined orbits, or not move in regular orbits at all. Halo stars may be acquired from small galaxies which fall into and merge with the spiral galaxy—for example, the Sagittarius Dwarf Elliptical Galaxy is in the process of merging with the Milky Way and observations show that some stars in the halo of the Milky Way have been acquired from it.

The pioneer of studies of the rotation of the Galaxy and the formation of the spiral arms was Bertil Lindblad in 1925. He realized that the idea of stars arranged permanently in a spiral shape was untenable due to the "winding dilemma". Since the angular speed of rotation of the galactic disk varies with distance from the centre of the galaxy (via a standard solar system type of gravitational model), a radial arm (like a spoke) would quickly become curved as the galaxy rotates. The arm would, after a few galactic rotations, become increasingly curved and wind around the galaxy ever tighter. This is called the winding problem. Measurements in the late 1960s showed that the orbital velocity of stars in spiral galaxies with respect to their distance from the galactic center is indeed higher than expected from Newtonian dynamics but still cannot explain the stability of the spiral structure.
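To make the winding dilemma concrete, here is a minimal sketch with Milky-Way-like numbers chosen purely for illustration: with a flat rotation curve the angular speed is Ω(r) = v/r, so inner stars complete far more turns than outer ones and a material arm wraps up within a few rotations:

```python
import math

V_KM_S = 220.0       # flat orbital speed, km/s (illustrative, Milky-Way-like)
KPC_KM = 3.086e16    # kilometers per kiloparsec
T_S = 1e9 * 3.156e7  # one billion years, in seconds

for r_kpc in (2, 5, 10):
    omega = V_KM_S / (r_kpc * KPC_KM)   # angular speed, rad/s
    turns = omega * T_S / (2 * math.pi)
    print(f"r = {r_kpc:2d} kpc: {turns:4.1f} turns per Gyr")
# Output: ~17.9, 7.2 and 3.6 turns. Inner stars lap the outer ones,
# shearing any fixed spoke of stars into an ever tighter coil.
```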

Since the 1960s, there have been two leading hypotheses or models for the spiral structures of galaxies:
  • Star formation caused by density waves in the galactic disk of the galaxy.
  • The SSPSF model – Star formation caused by shock waves in the interstellar medium.
These different hypotheses do not have to be mutually exclusive, as they may explain different types of spiral arms.

Bertil Lindblad proposed that the arms represent regions of enhanced density (density waves) that rotate more slowly than the galaxy’s stars and gas. As gas enters a density wave, it gets squeezed and makes new stars, some of which are short-lived blue stars that light the arms.

This idea was developed into density wave theory by C. C. Lin and Frank Shu in 1964. They suggested that the spiral arms were manifestations of spiral density waves, attempting to explain the large-scale structure of spirals in terms of a small-amplitude wave propagating with fixed angular velocity, that revolves around the galaxy at a speed different from that of the galaxy's gas and stars.

The arms appear brighter because there are more young stars (hence more massive, bright stars). These massive, bright stars also die out quickly, which would leave just the darker background stellar distribution behind the waves, hence making the waves visible.

While stars, therefore, do not remain forever in the position that we now see them in, they also do not follow the arms. The arms simply appear to pass through the stars as the stars travel in their orbits.

Recent results suggest that the orientation of the spin axis of spiral galaxies is not a chance result, but instead they are preferentially aligned along the surface of cosmic voids. That is, spiral galaxies tend to be oriented at a high angle of inclination relative to the large-scale structure of the surroundings. They have been described as lining up like "beads on a string," with their axis of rotation following the filaments around the edges of the voids.

In April 2011, a presentation to the Royal Astronomical Society's National Astronomy Meeting in Llandudno, Wales, by postgraduate student Robert Grand suggested that the motion of stars within a galaxy could itself form continuous spiral arms. Rather than stars moving in and out of static arms, Grand's simulations suggested that the arms themselves are transient features, with some arms breaking up and new ones forming over periods of 80 to 100 million years. This pattern of arm formation and destruction has not yet been observed in real galaxies.

In a recent paper published in Proc. Roy. Soc. A, Charles Francis and Erik Anderson showed, from observations of the motions of over 20,000 local stars (within 300 parsecs), that, contrary to density wave theory, stars do move along spiral arms, and described how mutual gravity between stars causes orbits to align on logarithmic spirals. When the theory is applied to gas, collisions between gas clouds generate the molecular clouds in which new stars form, and evolution towards grand-design bisymmetric spirals is explained.
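For reference, a logarithmic spiral has the polar form r = r0 · e^(b·θ) with b = tan(pitch angle). A short sketch generating points on such an arm; the pitch angle here is an arbitrary illustrative value, not one taken from the paper:

```python
import math

def log_spiral(r0: float, pitch_deg: float, n: int = 8):
    """Yield (theta, r) points on r = r0 * exp(theta * tan(pitch))."""
    b = math.tan(math.radians(pitch_deg))
    for k in range(n):
        theta = k * math.pi / 4  # sample every 45 degrees
        yield theta, r0 * math.exp(b * theta)

for theta, r in log_spiral(r0=1.0, pitch_deg=12):
    print(f"theta = {theta:5.2f} rad, r = {r:5.2f}")
```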

Biotechnology | Understanding and definition of Biotechnology

Biotechnology is a field of applied biology that involves the use of living organisms and bioprocesses in engineering, technology, medicine and other fields requiring bioproducts, and utilizes these products for manufacturing purposes. Modern use of similar terms includes genetic engineering as well as cell and tissue culture technologies. The concept encompasses a wide range of procedures (and history) for modifying living organisms according to human purposes, going back to the domestication of animals, the cultivation of plants, and "improvements" to these through breeding programs that employ artificial selection and hybridization. By comparison, bioengineering is generally thought of as a related field with its emphasis more on higher-level systems approaches (not necessarily altering or using biological materials directly) for interfacing with and utilizing living things. The United Nations Convention on Biological Diversity defines biotechnology as:

"Any technological application that uses biological systems, living organisms, or derivatives thereof, to make or modify products or processes for specific use."

Put another way, biotechnology is the application of scientific and technical advances in the life sciences to develop commercial products.

Biotechnology draws on the pure biological sciences (genetics, microbiology, animal cell culture, molecular biology, biochemistry, embryology, cell biology) and in many instances is also dependent on knowledge and methods from outside the sphere of biology (chemical engineering, bioprocess engineering, information technology, biorobotics). Conversely, modern biological sciences (including even concepts such as molecular ecology) are intimately entwined and dependent on the methods developed through biotechnology and what is commonly thought of as the life sciences industry.

For thousands of years, humans have used selective breeding to improve the production of crops and livestock for food. In selective breeding, organisms with desirable characteristics are mated to produce offspring with the same characteristics. For example, this technique was used with corn to produce the largest and sweetest crops.

In the early twentieth century, scientists gained a greater understanding of microbiology and explored ways of manufacturing specific products. In 1917, Chaim Weizmann first used a pure microbiological culture in an industrial process, employing Clostridium acetobutylicum to ferment corn starch and produce acetone, which the United Kingdom desperately needed to manufacture explosives during World War I.

Biotechnology has also led to the development of antibiotics. In 1928, Alexander Fleming discovered the mold Penicillium. His work led to the purification of the antibiotic penicillin by Howard Florey, Ernst Boris Chain and Norman Heatley. In 1940, penicillin became available for medicinal use to treat bacterial infections in humans.

The field of modern biotechnology is thought to have largely begun on June 16, 1980, when the United States Supreme Court ruled that a genetically modified microorganism could be patented in the case of Diamond v. Chakrabarty. Indian-born Ananda Chakrabarty, working for General Electric, had developed a bacterium (derived from the Pseudomonas genus) capable of breaking down crude oil, which he proposed to use in treating oil spills.

Revenue in the industry is expected to grow by 12.9% in 2008. Another factor influencing the biotechnology sector's success is improved intellectual property rights legislation—and enforcement—worldwide, as well as strengthened demand for medical and pharmaceutical products to cope with an ageing, and ailing, U.S. population.

Rising demand for biofuels is expected to be good news for the biotechnology sector, with the Department of Energy estimating ethanol usage could reduce U.S. petroleum-derived fuel consumption by up to 30% by 2030. The biotechnology sector has allowed the U.S. farming industry to rapidly increase its supply of corn and soybeans—the main inputs into biofuels—by developing genetically modified seeds which are resistant to pests and drought. By boosting farm productivity, biotechnology plays a crucial role in ensuring that biofuel production targets are met.

A series of derived terms have been coined to identify several branches of biotechnology; for example:
  • Bioinformatics is an interdisciplinary field which addresses biological problems using computational techniques, and makes the rapid organization and analysis of biological data possible. The field may also be referred to as computational biology, and can be defined as "conceptualizing biology in terms of molecules and then applying informatics techniques to understand and organize the information associated with these molecules, on a large scale." Bioinformatics plays a key role in various areas, such as functional genomics, structural genomics, and proteomics, and forms a key component in the biotechnology and pharmaceutical sector. (A toy computational example follows this list.)
  • Blue biotechnology is a term that has been used to describe the marine and aquatic applications of biotechnology, but its use is relatively rare.
  • Green biotechnology is biotechnology applied to agricultural processes. An example would be the selection and domestication of plants via micropropagation. Another example is the designing of transgenic plants to grow under specific environments in the presence (or absence) of chemicals. One hope is that green biotechnology might produce more environmentally friendly solutions than traditional industrial agriculture. An example of this is the engineering of a plant to express a pesticide, thereby ending the need of external application of pesticides. An example of this would be Bt corn. Whether or not green biotechnology products such as this are ultimately more environmentally friendly is a topic of considerable debate.
  • Red biotechnology is applied to medical processes. Some examples are the designing of organisms to produce antibiotics, and the engineering of genetic cures through genetic manipulation.
  • White biotechnology, also known as industrial biotechnology, is biotechnology applied to industrial processes. An example is the designing of an organism to produce a useful chemical. Another example is the use of enzymes as industrial catalysts to either produce valuable chemicals or destroy hazardous/polluting chemicals. White biotechnology tends to consume fewer resources than the traditional processes used to produce industrial goods. The investment and economic output of all of these types of applied biotechnologies is termed the bioeconomy.
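As a toy illustration of the kind of computation bioinformatics automates, the following Python sketch computes the GC content of a DNA sequence, a basic statistic in genome analysis. This is a minimal sketch; the sequence and function name are invented for illustration only.

    def gc_content(sequence: str) -> float:
        """Return the fraction of G and C bases in a DNA sequence."""
        seq = sequence.upper()
        return sum(1 for base in seq if base in "GC") / len(seq)

    dna = "ATGCGCGTTAACGCGGCTA"   # hypothetical sequence
    print(f"GC content: {gc_content(dna):.2%}")

Real bioinformatics pipelines apply the same idea, counting and comparing sequence features, at the scale of whole genomes.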
Biotechnological engineering or biological engineering is a branch of engineering that focuses on biotechnologies and biological science. It includes disciplines such as biochemical engineering, biomedical engineering, bioprocess engineering and biosystems engineering. Because the field is so new, the definition of a bioengineer is still fluid; in general, however, the discipline integrates the fundamental biological sciences with traditional engineering principles.

Biotechnologists are often employed to scale up bioprocesses from the laboratory scale to the manufacturing scale. Moreover, as with most engineers, they often deal with management, economic and legal issues. Since patents and regulation (e.g., Food and Drug Administration regulation in the U.S.) are very important issues for biotech enterprises, bioengineers are often required to have knowledge related to these issues.

The increasing number of biotech enterprises is likely to create a need for bioengineers in the years to come. Many universities throughout the world are now providing programs in bioengineering and biotechnology (as independent programs or specialty programs within more established engineering fields).

The National Institutes of Health (NIH) was the first federal agency to assume regulatory responsibility in the United States. The Recombinant DNA Advisory Committee of the NIH published guidelines for working with recombinant DNA and recombinant organisms in the laboratory. Nowadays, the agencies responsible for biotechnology regulation are: the US Department of Agriculture (USDA), which regulates plant pests and preparations derived from living organisms; the Environmental Protection Agency (EPA), which regulates pesticides and herbicides; and the Food and Drug Administration (FDA), which ensures that food and drug products are safe and effective.

Chemical formula | History and definition of Chemical formula

A chemical formula or molecular formula is a way of expressing information about the atoms that constitute a particular chemical compound.

The chemical formula identifies each constituent element by its chemical symbol and indicates the number of atoms of each element found in each discrete molecule of that compound. If a molecule contains more than one atom of a particular element, this quantity is indicated using a subscript after the chemical symbol (although 18th-century books often used superscripts instead), and symbols for several elements can be combined in one formula. For example, methane, a small molecule consisting of one carbon atom and four hydrogen atoms, has the chemical formula CH4. The sugar molecule glucose has six carbon atoms, twelve hydrogen atoms and six oxygen atoms, so its chemical formula is C6H12O6.
Chemical formulas may be used in chemical equations to describe chemical reactions. For ionic compounds and other non-molecular substances an empirical formula may be used, in which the subscripts indicate the ratio of the elements.

The 19th-century Swedish chemist Jöns Jacob Berzelius worked out this system for writing chemical formulas.

The connectivity of a molecule often has a strong influence on its physical and chemical properties and behavior. Two molecules composed of the same numbers of the same types of atoms (i.e. a pair of isomers) might have completely different chemical and/or physical properties if the atoms are connected differently or in different positions. In such cases, a structural formula can be useful, as it illustrates which atoms are bonded to which other ones. From the connectivity, it is often possible to deduce the approximate shape of the molecule.

A chemical formula supplies information about the types and spatial arrangement of bonds in the chemical, though it does not necessarily specify the exact isomer. For example, ethane consists of two carbon atoms single-bonded to each other, with each carbon atom having three hydrogen atoms bonded to it. Its chemical formula can be rendered as CH3CH3. In ethylene there is a double bond between the carbon atoms (and thus each carbon only has two hydrogens), so the chemical formula may be written CH2CH2; the fact that there is a double bond between the carbons is implicit because carbon has a valence of four. However, a more explicit method is to write H2C=CH2 or, less commonly, H2C::CH2. The two lines (or two pairs of dots) indicate that a double bond connects the atoms on either side of them.

A triple bond may be expressed with three lines or pairs of dots, and if there may be ambiguity, a single line or pair of dots may be used to indicate a single bond.

Molecules with multiple functional groups that are the same may be expressed by enclosing the repeated group in round brackets. For example, isobutane may be written (CH3)3CH. This semi-structural formula implies a different connectivity from other molecules that can be formed using the same atoms in the same proportions (isomers). The formula (CH3)3CH implies a central carbon atom attached to one hydrogen atom and three CH3 groups. The same number of atoms of each element (10 hydrogens and 4 carbons, or C4H10) may be used to make a straight chain molecule, butane: CH3CH2CH2CH3.
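To make the atom counting concrete, here is a small illustrative Python parser (a sketch, not a standard library routine) that expands a condensed formula, including parenthesised groups, into per-element atom counts; it confirms that (CH3)3CH and CH3CH2CH2CH3 both come out as C4H10.

    import re
    from collections import Counter

    def atom_counts(formula: str) -> Counter:
        # Tokenize into element symbols or parentheses, each with an
        # optional multiplier, and accumulate counts on a stack.
        tokens = re.findall(r'([A-Z][a-z]?|\(|\))(\d*)', formula)
        stack = [Counter()]
        for symbol, count in tokens:
            n = int(count) if count else 1
            if symbol == '(':
                stack.append(Counter())
            elif symbol == ')':
                group = stack.pop()
                for elem, c in group.items():
                    stack[-1][elem] += c * n
            else:
                stack[-1][symbol] += n
        return stack.pop()

    print(atom_counts('(CH3)3CH'))      # Counter({'H': 10, 'C': 4})
    print(atom_counts('CH3CH2CH2CH3'))  # Counter({'H': 10, 'C': 4})

The same routine also handles the polymer notation discussed below, such as CH3(CH2)50CH3.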

The alkene but-2-ene has two isomers, which the chemical formula CH3CH=CHCH3 does not distinguish. The relative position of the two methyl groups must be indicated by additional notation denoting whether the methyl groups are on the same side of the double bond (cis or Z) or on opposite sides from each other (trans or E).

For polymers, parentheses are placed around the repeating unit. For example, a hydrocarbon molecule described as CH3(CH2)50CH3 is a molecule with fifty repeating units. If the number of repeating units is unknown or variable, the letter n may be used to indicate this: CH3(CH2)nCH3.

For ions, the charge on a particular atom may be denoted with a right-hand superscript, for example Na+ or Cu2+. The total charge on a charged molecule or a polyatomic ion may also be shown in this way, for example H3O+ or SO42−.

For more complex ions, brackets are often used to enclose the ionic formula, as in [B12H12]2−, which is found in compounds such as Cs2[B12H12]. Parentheses can be nested inside brackets to indicate a repeating unit, as in [Co(NH3)6]3+. Here (NH3)6 indicates that the ion contains six NH3 groups, and the brackets enclose the entire formula of the ion, which carries a charge of 3+.

Although isotopes are more relevant to nuclear chemistry or stable isotope chemistry than to conventional chemistry, different isotopes may be indicated with a left-hand superscript in a chemical formula. For example, the phosphate ion containing radioactive phosphorus-32 is 32PO43−. Also, a study involving stable isotope ratios might include the molecule 18O16O.

A left-hand subscript is sometimes used redundantly to indicate the atomic number. For example, 8O2 for dioxygen, and 168O2 for the most abundant isotopic species of dioxygen. This is convenient when writing equations for nuclear reactions, in order to show the balance of charge more clearly.

In chemistry, the empirical formula of a chemical is a simple expression of the relative number of each type of atom or ratio of the elements in the compound. Empirical formulas are the standard for ionic compounds, such as CaCl2, and for macromolecules, such as SiO2. An empirical formula makes no reference to isomerism, structure, or absolute number of atoms. The term empirical refers to the process of elemental analysis, a technique of analytical chemistry used to determine the relative percent composition of a pure chemical substance by element.

For example, hexane has a molecular formula of C6H14, or structurally CH3CH2CH2CH2CH2CH3, implying that it has a chain structure of 6 carbon atoms and 14 hydrogen atoms. However, the empirical formula for hexane is C3H7. Likewise, the empirical formula for hydrogen peroxide, H2O2, is simply HO, expressing the 1:1 ratio of component elements. Formaldehyde and acetic acid have the same empirical formula, CH2O: this is the actual chemical formula for formaldehyde, but acetic acid has double the number of atoms.
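Reducing a molecular formula to its empirical formula amounts to dividing all atom counts by their greatest common divisor. A minimal Python sketch (the function name is invented for illustration):

    from math import gcd
    from functools import reduce

    def empirical(counts: dict) -> dict:
        # Divide every count by the GCD of all counts to get
        # the smallest whole-number ratio of elements.
        divisor = reduce(gcd, counts.values())
        return {elem: n // divisor for elem, n in counts.items()}

    print(empirical({'C': 6, 'H': 14}))  # {'C': 3, 'H': 7}  hexane -> C3H7
    print(empirical({'H': 2, 'O': 2}))   # {'H': 1, 'O': 1}  H2O2 -> HO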

A chemical formula used for a series of compounds that differ from each other by a constant unit is called a general formula. Such a series is called a homologous series, and its members are called homologs.

For example, alcohols: CnH(2n+1)OH (n ≥ 1)
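Expanding the general formula for the first few values of n reproduces the familiar members of the series (a throwaway sketch):

    # First members of the alcohol homologous series CnH(2n+1)OH.
    for n in range(1, 5):
        carbon = "C" if n == 1 else f"C{n}"
        print(f"{carbon}H{2 * n + 1}OH")
    # CH3OH (methanol), C2H5OH (ethanol), C3H7OH, C4H9OH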

The Hill system is a system of writing chemical formulas such that the number of carbon atoms in a molecule is indicated first, the number of hydrogen atoms next, and then the number of atoms of all other chemical elements, in alphabetical order. When the formula contains no carbon, all the elements, including hydrogen, are listed alphabetically. This deterministic system enables straightforward sorting and searching of compounds.
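A minimal sketch of the ordering rule (the helper below is illustrative, not a library function):

    def hill_order(elements):
        # Hill system: C first, then H, then the rest alphabetically;
        # with no carbon present, everything is alphabetical.
        elems = set(elements)
        if "C" in elems:
            return (["C"] + (["H"] if "H" in elems else [])
                    + sorted(elems - {"C", "H"}))
        return sorted(elems)

    print(hill_order(["H", "O", "C", "N"]))  # ['C', 'H', 'N', 'O']
    print(hill_order(["O", "H", "S"]))       # ['H', 'O', 'S'] (no carbon)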

Plate tectonics | Understanding and definition of Plate tectonics

Plate tectonics (from the Late Latin tectonicus, from the Greek: τεκτονικός "pertaining to building") is a scientific theory which describes the large scale motions of Earth's lithosphere. The theory builds on the older concepts of continental drift, developed during the first decades of the 20th century (one of the most famous advocates was Alfred Wegener), and was accepted by the majority of the geoscientific community when the concepts of seafloor spreading were developed in the late 1950s and early 1960s. The lithosphere is broken up into what are called "tectonic plates". In the case of the Earth, there are currently seven to eight major (depending on how they are defined) and many minor plates. The lithospheric plates ride on the asthenosphere. These plates move in relation to one another at one of three types of plate boundaries: convergent, or collisional boundaries; divergent boundaries, also called spreading centers; and conservative transform boundaries. Earthquakes, volcanic activity, mountain-building, and oceanic trench formation occur along these plate boundaries. The lateral relative movement of the plates varies, though it is typically 0–100 mm annually.

The tectonic plates are composed of two types of lithosphere: thicker continental lithosphere and thinner oceanic lithosphere. The upper part is called the crust, again of two types (continental and oceanic). This means that a plate can be of one type, or of both types. One of the main points the theory proposes is that the amount of surface of the (continental and oceanic) plates that disappears into the mantle along the convergent boundaries by subduction is more or less in equilibrium with the new (oceanic) crust that is formed along the divergent margins by seafloor spreading. This is also referred to as the "conveyor belt" principle. In this way, the total surface of the globe remains the same. This is in contrast with earlier theories advocated before the Plate Tectonics "paradigm", as it is sometimes called, became the main scientific model, theories that proposed gradual shrinking (contraction) or gradual expansion of the globe, and that still exist in science as alternative models.

Regarding the driving mechanism of the plates, various models co-exist: tectonic plates are able to move because the Earth's lithosphere has a higher strength and lower density than the underlying asthenosphere. Lateral density variations in the mantle result in convection. Plate movement is thought to be driven by a combination of the motion of the seafloor away from the spreading ridge (due to variations in topography and density of the crust, which result in differences in gravitational forces) and drag, with downward suction, at the subduction zones. A different explanation lies in the different forces generated by the rotation of the globe and the tidal forces of the Sun and the Moon. The relative importance of each of these factors is unclear, and is still subject to debate (see also below).

The outer layers of the Earth are divided into lithosphere and asthenosphere. This is based on differences in mechanical properties and in the method for the transfer of heat. Mechanically, the lithosphere is cooler and more rigid, while the asthenosphere is hotter and flows more easily. In terms of heat transfer, the lithosphere loses heat by conduction whereas the asthenosphere also transfers heat by convection and has a nearly adiabatic temperature gradient. This division should not be confused with the chemical subdivision of these same layers into the mantle (comprising both the asthenosphere and the mantle portion of the lithosphere) and the crust: a given piece of mantle may be part of the lithosphere or the asthenosphere at different times, depending on its temperature and pressure.

The key principle of plate tectonics is that the lithosphere exists as separate and distinct tectonic plates, which ride on the fluid-like (visco-elastic solid) asthenosphere. Plate motions range from a typical 10–40 mm/a (Mid-Atlantic Ridge; about as fast as fingernails grow) up to about 160 mm/a (Nazca Plate; about as fast as hair grows). The driving mechanism behind this movement is described separately below.

Tectonic lithosphere plates consist of lithospheric mantle overlain by either or both of two types of crustal material: oceanic crust (in older texts called sima, from silicon and magnesium) and continental crust (sial, from silicon and aluminium). Average oceanic lithosphere is typically 100 km thick; its thickness is a function of its age: as time passes, it conductively cools and becomes thicker. Because it is formed at mid-ocean ridges and spreads outwards, its thickness is also a function of its distance from the mid-ocean ridge where it was formed. For the typical distance that oceanic lithosphere travels before being subducted, the thickness varies from about 6 km at mid-ocean ridges to greater than 100 km at subduction zones; for shorter or longer distances, the subduction-zone (and therefore also the mean) thickness becomes smaller or larger, respectively. Continental lithosphere is typically ~200 km thick, though this varies considerably between basins, mountain ranges, and stable cratonic interiors of continents. The two types of crust also differ in thickness, with continental crust being considerably thicker than oceanic (35 km vs. 6 km).
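The age–thickness relationship can be made concrete with the half-space cooling model, a standard first-order approximation in which plate thickness grows with the square root of age. In the sketch below, the constants are typical textbook values, and the factor 2.32 corresponds to placing the plate base where the rock reaches roughly 90% of the mantle temperature; this is an illustrative estimate, not a full thermal model.

    from math import sqrt

    KAPPA = 1.0e-6            # thermal diffusivity of mantle rock, m^2/s
    SECONDS_PER_MYR = 3.156e13

    def plate_thickness_km(age_myr: float) -> float:
        # Half-space cooling estimate: z ~ 2.32 * sqrt(kappa * t).
        t = age_myr * SECONDS_PER_MYR
        return 2.32 * sqrt(KAPPA * t) / 1000.0

    for age in (1, 10, 60, 100):
        print(f"{age:>4} Myr -> ~{plate_thickness_km(age):.0f} km")
    # roughly 13 km at 1 Myr, ~100 km by ~60 Myr, ~130 km at 100 Myr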

The location where two plates meet is called a plate boundary, and plate boundaries are commonly associated with geological events such as earthquakes and the creation of topographic features such as mountains, volcanoes, mid-ocean ridges, and oceanic trenches. The majority of the world's active volcanoes occur along plate boundaries, with the Pacific Plate's Ring of Fire being most active and most widely known. These boundaries are discussed in further detail below. Some volcanoes occur in the interiors of plates, and these have been variously attributed to internal plate deformation and to mantle plumes.

Basically, three types of plate boundaries exist, with a fourth, mixed type, characterized by the way the plates move relative to each other. They are associated with different types of surface phenomena. The different types of plate boundaries are:
* Transform boundaries (Conservative) occur where plates slide or, perhaps more accurately, grind past each other along transform faults. The relative motion of the two plates is either sinistral (the far side moves to the left relative to the observer) or dextral (the far side moves to the right). The San Andreas Fault in California is an example of a transform boundary exhibiting dextral motion.
* Divergent boundaries (Constructive) occur where two plates slide apart from each other. Mid-ocean ridges (e.g., Mid-Atlantic Ridge) and active zones of rifting (such as Africa's Great Rift Valley) are both examples of divergent boundaries.
* Convergent boundaries (Destructive) (or active margins) occur where two plates slide towards each other commonly forming either a subduction zone (if one plate moves underneath the other) or a continental collision (if the two plates contain continental crust). Deep marine trenches are typically associated with subduction zones, and the basins that develop along the active boundary are often called "foreland basins". The subducting slab contains many hydrous minerals, which release their water on heating; this water then causes the mantle to melt, producing volcanism. Examples of this are the Andes mountain range in South America and the Japanese island arc.
* Plate boundary zones occur where the effects of the interactions are unclear and the boundaries, usually occurring along a broad belt, are not well defined, and may show various types of movements in different episodes.

Plate tectonics is basically a kinematic phenomenon: Earth scientists agree on the observation and deduction that the plates have moved with respect to one another, and debate and find agreement on how and when. But a major question remains: what is the motor behind this movement, the geodynamic mechanism? Here the science diverges into different theories.

How mantle convection relates, directly and indirectly, to the motion of the plates is a matter of ongoing study and discussion in geodynamics. Somehow, the energy of convection must be transferred to the lithosphere in order for tectonic plates to move. There are essentially two types of forces that are thought to influence plate motion: friction and gravity.
* Basal drag (friction): The plate motion is in this way driven by friction between the convection currents in the asthenosphere and the more rigid overlying floating lithosphere.
* Slab suction (gravity): Local convection currents exert a downward frictional pull on plates in subduction zones at ocean trenches. Slab suction may occur in a geodynamic setting wherein basal tractions continue to act on the plate as it dives into the mantle (although perhaps to a greater extent acting on both the under and upper side of the slab).

Lately, the convection theory has been much debated, as modern techniques based on 3D seismic tomography for imaging the internal structure of the Earth's mantle still fail to recognise the predicted large-scale convection cells. Therefore, alternative views have been proposed:

In the theory of plume tectonics, developed during the 1990s, a modified concept of mantle convection currents is used: superplumes rising from the deeper mantle are held to be the drivers of, or substitutes for, the major convection cells. These ideas find their roots in the early 1930s, with the so-called "fixistic" ideas of the European and Russian Earth science schools, and find resonance in the modern theories which envisage hot spots/mantle plumes that remain fixed in the mantle and are overridden by oceanic and continental lithosphere plates over time, leaving their traces in the geological record (though these phenomena are invoked not as real driving mechanisms, but rather as modulators). The modern theories that continue building on the older mantle-doming concepts and see the movements of the plates as a secondary phenomenon are beyond the scope of this page and are discussed elsewhere, for example on the plume tectonics page.

Another suggestion is that the mantle flows neither in cells nor large plumes, but rather as a series of channels just below the Earth's crust which then provide basal friction to the lithosphere. This theory is called "surge tectonics" and became quite popular in geophysics and geodynamics during the 1980s and 1990s.

Gravitational sliding away from mantle doming: according to older theories, one of the driving mechanisms of the plates is the existence of large-scale asthenosphere/mantle domes, which cause the gravitational sliding of lithosphere plates away from them. This gravitational sliding is a secondary phenomenon of this basically vertically oriented mechanism. It can act on various scales, from the small scale of one island arc up to the larger scale of an entire ocean basin.

The actual vector of a plate's motion must necessarily be a function of all the forces acting upon the plate. However, the problem remains of determining the degree to which each process contributes to the motion of each tectonic plate.

The diversity of geodynamic settings and properties of each plate must clearly result in differences in the degree to which such processes are actively driving the plates. One method of dealing with this problem is to consider the relative rate at which each plate is moving and to consider the available evidence of each driving force upon the plate as far as possible.

Plate tectonics is the main current theory in Earth Sciences regarding the development of our planet Earth. It is, therefore, appropriate to dedicate some space to explain how the Earth Science community, step by step, has built this theory, from early speculations, through the gathering of proof and severe debates, up to the refinement and quantification, and still ongoing confrontations with alternative ideas.

In the late 19th and early 20th centuries, geologists assumed that the Earth's major features were fixed, and that most geologic features such as basin development and mountain ranges could be explained by vertical crustal movement, described in what is called the geosynclinal theory. Generally, this was placed in the context of a contracting planet Earth due to heat loss in the course of a relatively short geological time.

It was observed as early as 1596 that the opposite coasts of the Atlantic Ocean—or, more precisely, the edges of the continental shelves—have similar shapes and seem to have once fitted together.

Since that time many theories were proposed to explain this apparent complementarity, but the assumption of a solid Earth made these various proposals difficult to accept.

As it was observed early on that although granite existed on continents, the seafloor seemed to be composed of denser basalt, the prevailing concept during the first half of the twentieth century was that there were two types of crust, named "sial" (continental-type crust) and "sima" (oceanic-type crust). Furthermore, it was supposed that a static shell of strata was present under the continents. It therefore looked apparent that a layer of basalt (sima) underlies the continental rocks.

As more and more of the seafloor was mapped during the 1950s, the magnetic variations turned out not to be random or isolated occurrences, but instead revealed recognizable patterns. When these magnetic patterns were mapped over a wide region, the ocean floor showed a zebra-like pattern: one stripe with normal polarity and the adjoining stripe with reversed polarity. The overall pattern, defined by these alternating bands of normally and reversely polarized rock, became known as magnetic striping. It was published by Ron G. Mason and co-workers in 1961, who did not, however, find an explanation for these data in terms of seafloor spreading, as Vine, Matthews and Morley did a few years later.

After all these considerations, Plate Tectonics (or, as it was initially called, "New Global Tectonics") became quickly accepted in the scientific world, and numerous papers followed that defined the concepts:
* In 1965, Tuzo Wilson, who had been a promoter of the seafloor spreading hypothesis and continental drift from the very beginning, added the concept of transform faults to the model, completing the classes of fault types necessary to make the mobility of the plates on the globe work out.
* A symposium on continental drift was held at the Royal Society of London in 1965, which must be regarded as the official start of the acceptance of plate tectonics by the scientific community; its abstracts were issued as Blackett, Bullard & Runcorn (1965). At this symposium, Edward Bullard and co-workers showed with a computer calculation how the continents along both sides of the Atlantic would best fit to close the ocean, which became known as the famous "Bullard's Fit".
* In 1966 Wilson published the paper that referred to previous plate tectonic reconstructions, introducing the concept of what is now known as the "Wilson Cycle".
* In 1967, at the American Geophysical Union's meeting, W. Jason Morgan proposed that the Earth's surface consists of 12 rigid plates that move relative to each other.
* Two months later, Xavier Le Pichon published a complete model based on 6 major plates with their relative motions, which marked the final acceptance by the scientific community of plate tectonics.
* In the same year, McKenzie and Parker independently presented a model similar to Morgan's using translations and rotations on a sphere to define the plate motions.

The appearance of plate tectonics on terrestrial planets is related to planetary mass, with planets more massive than Earth expected to exhibit plate tectonics. Earth may be a borderline case, owing its tectonic activity to abundant water (silica and water form a deep eutectic).

Venus shows no evidence of active plate tectonics. There is debatable evidence of active tectonics in the planet's distant past; however, events taking place since then (such as the plausible and generally accepted hypothesis that the Venusian lithosphere has thickened greatly over the course of several hundred million years) have made constraining the course of its geologic record difficult. However, the numerous well-preserved impact craters have been utilized as a dating method to approximately date the Venusian surface (since there are thus far no known samples of Venusian rock to be dated by more reliable methods). The dates derived are dominantly in the range c. 500 to 750 Ma, although ages of up to c. 1.2 Ga have been calculated. This research has led to the fairly well accepted hypothesis that Venus has undergone an essentially complete volcanic resurfacing at least once in its distant past, with the last event taking place approximately within the range of estimated surface ages. While the mechanism of such an impressive thermal event remains a debated issue in Venusian geosciences, some scientists are advocates of processes involving plate motion to some extent.

In the 1990s, it was proposed that Martian Crustal Dichotomy was created by plate tectonic processes. Scientists today disagree, and believe that it was created either by upwelling within the Martian mantle that thickened the crust of the Southern Highlands and formed Tharsis or by a giant impact that excavated the Northern Lowlands.
 