
Technetium | History and definition of Technetium | technetium-99m | technetium element

Technetium is the chemical element with atomic number 43 and symbol Tc. It is the lowest-atomic-number element with no stable isotopes; every form of it is radioactive. Nearly all technetium is produced synthetically, and only minute amounts are found in nature, where it arises as a spontaneous fission product in uranium ore or through neutron capture in molybdenum ores. The chemical properties of this silvery gray, crystalline transition metal are intermediate between those of rhenium and manganese.

Many of technetium's properties were predicted by Dmitri Mendeleev before the element was discovered. Mendeleev noted a gap in his periodic table and gave the undiscovered element the provisional name ekamanganese (Em). In 1937 technetium (specifically the technetium-97 isotope) became the first predominantly artificial element to be produced, hence its name (from the Greek τεχνητός, meaning "artificial").

Its short-lived gamma ray-emitting nuclear isomer, technetium-99m, is used in nuclear medicine for a wide variety of diagnostic tests. Technetium-99 is used as a gamma ray-free source of beta particles. Long-lived technetium isotopes produced commercially are by-products of the fission of uranium-235 in nuclear reactors and are extracted from nuclear fuel rods. Because no isotope of technetium has a half-life longer than 4.2 million years (technetium-98), its 1952 detection in red giants, stars billions of years old, helped bolster the theory that stars can produce heavier elements.

From the 1860s through 1871, early forms of the periodic table proposed by Dmitri Mendeleev contained a gap between molybdenum (element 42) and ruthenium (element 44). In 1871, Mendeleev predicted that this missing element would occupy the empty place below manganese and therefore have similar chemical properties. Mendeleev gave it the provisional name ekamanganese (eka-, from the Sanskrit word for one), because the predicted element was one place down from the known element manganese.

Many early researchers, both before and after the periodic table was published, were eager to be the first to discover and name the missing element; its location in the table suggested that it should be easier to find than other undiscovered elements. It was first thought to have been found in platinum ores in 1828 and was given the name polinium, but the sample turned out to be impure iridium. Then, in 1846, the element ilmenium was claimed to have been discovered, but it was later determined to be impure niobium. The mistake was repeated in 1847 with the "discovery" of pelopium.

In 1877, the Russian chemist Serge Kern reported discovering the missing element in platinum ore. Kern named what he thought was the new element davyum (after the noted English chemist Sir Humphry Davy), but it was eventually determined to be a mixture of iridium, rhodium and iron. Another candidate, lucium, followed in 1896, but it was determined to be yttrium. Then in 1908, the Japanese chemist Masataka Ogawa found evidence in the mineral thorianite, which he thought indicated the presence of element 43. Ogawa named the element nipponium, after Japan (which is Nippon in Japanese). In 2004, H. K. Yoshihara used "a record of X-ray spectrum of Ogawa's nipponium sample from thorianite [which] was contained in a photographic plate preserved by his family. The spectrum was read and indicated the absence of the element 43 and the presence of the element 75 (rhenium)."

German chemists Walter Noddack, Otto Berg, and Ida Tacke reported the discovery of element 75 and element 43 in 1925, and named element 43 masurium (after Masuria in eastern Prussia, now in Poland, the region where Walter Noddack's family originated). The group bombarded columbite with a beam of electrons and deduced element 43 was present by examining X-ray diffraction spectrograms. The wavelength of the X-rays produced is related to the atomic number by a formula derived by Henry Moseley in 1913. The team claimed to detect a faint X-ray signal at a wavelength produced by element 43. Later experimenters could not replicate the discovery, and it was dismissed as an error for many years. Still, in 1933, a series of articles on the discovery of elements quoted the name masurium for element 43. Debate still exists as to whether the 1925 team actually did discover element 43.

The discovery of element 43 was finally confirmed in a December 1936 experiment at the University of Palermo in Sicily conducted by Carlo Perrier and Emilio Segrè. In mid-1936, Segrè visited the United States, first Columbia University in New York and then the Lawrence Berkeley National Laboratory in California. He persuaded cyclotron inventor Ernest Lawrence to let him take back some discarded cyclotron parts that had become radioactive. Lawrence mailed him a molybdenum foil that had been part of the deflector in the cyclotron.

Segrè enlisted his colleague Perrier to attempt to prove, through comparative chemistry, that the molybdenum activity was indeed Z = 43. They succeeded in isolating the isotopes technetium-95 and technetium-97. University of Palermo officials wanted them to name their discovery "panormium", after the Latin name for Palermo, Panormus. In 1947 element 43 was named after the Greek word τεχνητός, meaning "artificial", since it was the first element to be artificially produced. Segrè returned to Berkeley and met Glenn T. Seaborg. They isolated the metastable isotope technetium-99m, which is now used in some ten million medical diagnostic procedures annually.

In 1952, astronomer Paul W. Merrill in California detected the spectral signature of technetium (in particular, light at wavelengths of 403.1 nm, 423.8 nm, 426.8 nm, and 429.7 nm) in light from S-type red giants. The stars were near the end of their lives yet were rich in this short-lived element, meaning nuclear reactions within the stars must be producing it. This evidence bolstered the then-unproven theory that stars are where nucleosynthesis of the heavier elements occurs. More recently, such observations provided evidence that elements are formed by neutron capture in the s-process.

Since its discovery, there have been many searches in terrestrial materials for natural sources of technetium. In 1962, technetium-99 was isolated and identified in pitchblende from the Belgian Congo in extremely small quantities (about 0.2 ng/kg); there it originates as a spontaneous fission product of uranium-238. There is also evidence that the Oklo natural nuclear fission reactor produced significant amounts of technetium-99, which has since decayed into ruthenium-99.

The long half-life of technetium-99 and its ability to form an anionic species make it a major concern for the long-term disposal of radioactive waste. Many of the processes designed to remove fission products in reprocessing plants aim at cationic species such as caesium (e.g., caesium-137) and strontium (e.g., strontium-90), so pertechnetate is able to escape through these treatment processes. Current disposal options favor burial in continental, geologically stable rock. The primary danger with such a course is that the waste is likely to come into contact with water, which could leach radioactive contamination into the environment. The anionic pertechnetate and iodide do not adsorb well onto mineral surfaces, so they are likely to be washed away. By comparison, plutonium, uranium, and caesium are much more able to bind to soil particles. For this reason, the environmental chemistry of technetium is an active area of research.

An alternative disposal method, transmutation, has been demonstrated at CERN for technetium-99. In this process, technetium (as a technetium-99 metal target) is bombarded with neutrons to form short-lived technetium-100 (half-life 16 seconds), which decays by beta emission to ruthenium-100. If recovery of usable ruthenium is a goal, an extremely pure technetium target is needed; if small traces of minor actinides such as americium and curium are present in the target, they are likely to undergo fission and form more fission products, increasing the radioactivity of the irradiated target. The formation of ruthenium-106 (half-life 374 days) from this "fresh fission" is likely to increase the activity of the final ruthenium metal, which will then require a longer cooling time after irradiation before the ruthenium can be used.

The actual production of technetium-99 from spent nuclear fuel is a long process. During fuel reprocessing, it appears in the waste liquid, which is highly radioactive. After sitting for several years, the radioactivity falls to a point where extraction of the long-lived isotopes, including technetium-99, becomes feasible. Several chemical extraction processes are then used, yielding technetium-99 metal of high purity.

Technetium star

A technetium star, or more properly a Tc-rich star, is a star whose stellar spectrum contains absorption lines of the light radioactive metal technetium. The most stable isotope of technetium is 98Tc, with a half-life of 4.2 million years, far too short for the metal to be material surviving from before the star's formation. The detection of technetium in stellar spectra in 1952 therefore provided unambiguous proof of nucleosynthesis in stars, one of the more extreme cases being R Geminorum.
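The argument can be made concrete with a little arithmetic. Below is a minimal sketch in Python (the stellar age is an illustrative assumption) showing how completely technetium-98 decays away over a typical giant-star lifetime:

```python
# Surviving fraction of an isotope after time t: N/N0 = (1/2)**(t / t_half)
def surviving_fraction(t_years: float, half_life_years: float) -> float:
    return 0.5 ** (t_years / half_life_years)

TC98_HALF_LIFE = 4.2e6   # years, the longest-lived technetium isotope (from the text)
STELLAR_AGE = 1.0e9      # years, an assumed (conservative) age for an evolved star

print(surviving_fraction(STELLAR_AGE, TC98_HALF_LIFE))
# ~2e-72: effectively zero, so any technetium observed must be freshly synthesized
```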

Stars containing technetium belong to the class of so-called asymptotic giant branch (AGB) stars: stars that are like red giants, but with a slightly higher luminosity, and which burn hydrogen in an inner shell. Members of this class switch to helium shell burning at intervals of some 100,000 years, in so-called "dredge-ups". Technetium stars belong to the classes M, MS, S, SC, and C-N. They are most often variable stars of the long-period variable types.

Current research indicates that the presence of technetium in AGB stars occurs only after some evolution, and that a significant fraction of these stars do not exhibit the metal in their spectra. The presence of technetium seems to be related to the so-called "third dredge-up" in the history of these stars.

Biomass | Understanding and definition of Biomass | a renewable energy source

Biomass, a renewable energy source, is biological material from living or recently living organisms, such as wood, waste, (hydrogen) gas, and alcohol fuels. Biomass is commonly plant matter grown to generate electricity or produce heat. In this sense, living biomass can also be included, as plants can also generate electricity while still alive. The most conventional way in which biomass is used, however, still relies on direct incineration; forest residues (such as dead trees, branches, and tree stumps), yard clippings, wood chips, and garbage are often used for this. Biomass also includes plant or animal matter used for the production of fibers or chemicals, and may include biodegradable wastes that can be burnt as fuel. It excludes organic materials such as fossil fuels, which have been transformed by geological processes into substances such as coal or petroleum.

Industrial biomass can be grown from numerous types of plants, including miscanthus, switchgrass, hemp, corn, poplar, willow, sorghum, sugarcane, and a variety of tree species, ranging from eucalyptus to oil palm (palm oil). The particular plant used is usually not important to the end products, but it does affect the processing of the raw material.

Although fossil fuels have their origin in ancient biomass, they are not considered biomass by the generally accepted definition because they contain carbon that has been "out" of the carbon cycle for a very long time. Their combustion therefore disturbs the carbon dioxide content in the atmosphere.

Biomass is carbon-, hydrogen- and oxygen-based. Nitrogen and small quantities of other atoms, including alkali, alkaline earth, and heavy metals, can be found as well. Metals are often found in functional molecules such as the porphyrins; chlorophyll, for example, contains magnesium.

Plants in particular combine water and carbon dioxide into sugar building blocks. The required energy is captured from light via photosynthesis, based on chlorophyll. On average, between 0.1 and 1% of the available light is stored as chemical energy in plants. The sugar building blocks are the starting point for the major fractions found in all terrestrial plants: lignin, hemicellulose, and cellulose.
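As a rough illustration of that 0.1-1% figure, the sketch below estimates the chemical energy a square metre of vegetation might store in a year; the insolation value is an assumed round number, not taken from the text:

```python
# Hypothetical back-of-envelope: energy stored by photosynthesis per m2 per year
AVG_INSOLATION_W_PER_M2 = 170        # assumed year-round surface average
SECONDS_PER_YEAR = 3.156e7

for efficiency in (0.001, 0.01):     # the 0.1% and 1% bounds from the text
    stored_mj = AVG_INSOLATION_W_PER_M2 * SECONDS_PER_YEAR * efficiency / 1e6
    print(f"{efficiency:.1%} efficiency -> ~{stored_mj:.0f} MJ/m2/yr stored")
# roughly 5 to 54 MJ per square metre per year under these assumptions
```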

Biomass energy is derived from five distinct energy sources: garbage, wood, waste, landfill gases, and alcohol fuels. Wood energy is derived both from direct use of harvested wood as a fuel and from wood waste streams. The largest source of energy from wood is pulping liquor or “black liquor,” a waste product from processes of the pulp, paper and paperboard industry. Waste energy is the second-largest source of biomass energy. The main contributors of waste energy are municipal solid waste (MSW), manufacturing waste, and landfill gas. Biomass alcohol fuel, or ethanol, is derived primarily from sugarcane and corn. It can be used directly as a fuel or as an additive to gasoline.

Biomass can be converted to other usable forms of energy, such as methane gas, or to transportation fuels such as ethanol and biodiesel. Rotting garbage and agricultural and human waste release methane gas, also called "landfill gas" or "biogas". Crops such as corn and sugar cane can be fermented to produce the transportation fuel ethanol. Biodiesel, another transportation fuel, can be produced from left-over food products such as vegetable oils and animal fats. Biomass-to-liquids (BTL) fuels and cellulosic ethanol are still under research.

The biomass used for electricity production varies by region. Forest by-products, such as wood residues, are popular in the United States. Agricultural waste is common in Mauritius (sugar cane residue) and Southeast Asia (rice husks). Animal husbandry residues, such as poultry litter, are popular in the UK.

There are a number of technological options available to make use of a wide variety of biomass types as a renewable energy source. Conversion technologies may release the energy directly, in the form of heat or electricity, or may convert it to another form, such as liquid biofuel or combustible biogas. While for some classes of biomass resource there may be a number of usage options, for others there may be only one appropriate technology.

Thermal conversion processes are those in which heat is the dominant mechanism to convert the biomass into another chemical form. The basic alternatives of combustion, torrefaction, pyrolysis, and gasification are separated principally by the extent to which the chemical reactions involved are allowed to proceed (mainly controlled by the availability of oxygen and the conversion temperature).

There are a number of other less common, more experimental or proprietary thermal processes that may offer benefits, such as hydrothermal upgrading (HTU) and hydroprocessing. Some have been developed for use on high-moisture-content biomass, including aqueous slurries, and allow it to be converted into more convenient forms. Applications of thermal conversion include combined heat and power (CHP) and co-firing. In a typical biomass power plant, efficiencies range from 20% to 27%.

A range of chemical processes may be used to convert biomass into other forms, such as to produce a fuel that is more conveniently used, transported or stored, or to exploit some property of the process itself.

As biomass is a natural material, many highly efficient biochemical processes have developed in nature to break down the molecules of which biomass is composed, and many of these biochemical conversion processes can be harnessed.

Biochemical conversion makes use of the enzymes of bacteria and other micro-organisms to break down biomass. In most cases, micro-organisms perform the conversion: anaerobic digestion, fermentation, and composting. A related chemical process, transesterification, converts straight and waste vegetable oils into biodiesel. Another way of breaking down biomass is to break its carbohydrates down into simple sugars and ferment them to make alcohol; however, this process has not yet been perfected, and scientists are still researching the effects of converting biomass.

The existing biomass power generating industry in the United States, which consists of approximately 11,000 MW of summer operating capacity actively supplying power to the grid, produces about 1.4 percent of the U.S. electricity supply.

Currently, the New Hope Power Partnership is the largest biomass power plant in North America. The 140 MW facility uses sugar cane fiber (bagasse) and recycled urban wood as fuel to generate enough power for its large milling and refining operations as well as to supply renewable electricity for nearly 60,000 homes. The facility reduces dependence on oil by more than one million barrels per year and, by recycling sugar cane and wood waste, preserves landfill space in urban communities in Florida.

Using biomass as a fuel produces air pollution in the form of carbon monoxide, NOx (nitrogen oxides), VOCs (volatile organic compounds), particulates and other pollutants, in some cases at levels above those from traditional fuel sources such as coal or natural gas. Black carbon - a pollutant created by incomplete combustion of fossil fuels, biofuels, and biomass - is possibly the second largest contributor to global warming. In 2009 a Swedish study of the giant brown haze that periodically covers large areas in South Asia determined that it had been principally produced by biomass burning, and to a lesser extent by fossil-fuel burning. Researchers measured a significant concentration of 14C, which is associated with recent plant life rather than with fossil fuels.

Biomass power plant size is often driven by biomass availability in close proximity, as transport costs of the (bulky) fuel play a key role in the plant's economics. Rail and especially shipping on waterways can, however, reduce transport costs significantly, which has led to a global biomass market. To make small plants of 1 MW (electrical) economically profitable, those power plants need to be equipped with technology able to convert biomass to useful electricity with high efficiency, such as ORC technology, a cycle similar to the water-steam power process but with an organic working medium. Such small power plants can be found in Europe.

On combustion, the carbon from biomass is released into the atmosphere as carbon dioxide (CO2). The amount of carbon stored in dry wood is approximately 50% by weight. When from agricultural sources, plant matter used as a fuel can be replaced by planting for new growth. When the biomass is from forests, the time to recapture the carbon stored is generally longer, and the carbon storage capacity of the forest may be reduced overall if destructive forestry techniques are employed.
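The 50%-carbon figure implies a simple stoichiometric estimate of the CO2 released per kilogram of dry wood burned; a minimal sketch, assuming complete combustion and standard molar masses:

```python
# CO2 released by burning 1 kg of dry wood, assuming it is ~50% carbon by weight
CARBON_FRACTION = 0.50     # from the text
MOLAR_MASS_CO2 = 44.01     # g/mol
MOLAR_MASS_C = 12.011      # g/mol

co2_per_kg_wood = CARBON_FRACTION * MOLAR_MASS_CO2 / MOLAR_MASS_C
print(f"~{co2_per_kg_wood:.2f} kg CO2 per kg of dry wood")   # ~1.83 kg
```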

Despite harvesting, biomass crops may sequester carbon. For example, soil organic carbon has been observed to be greater in switchgrass stands than in cultivated cropland soil, especially at depths below 12 inches. The grass sequesters the carbon in its increased root biomass. Typically, perennial crops sequester much more carbon than annual crops, owing to the much greater non-harvested biomass, both living and dead, built up over years, and the much lower soil disruption in cultivation.

The biomass-is-carbon-neutral proposal put forward in the early 1990s has been superseded by more recent science that recognizes that mature, intact forests sequester carbon more effectively than cut-over areas. When a tree’s carbon is released into the atmosphere in a single pulse, it contributes to climate change much more than woodland timber rotting slowly over decades. Current studies indicate that "even after 50 years the forest has not recovered to its initial carbon storage" and "the optimal strategy is likely to be protection of the standing forest".

International Energy Agency | History and definition of the International Energy Agency

The International Energy Agency (IEA; French: Agence internationale de l'énergie) is a Paris-based autonomous intergovernmental organization established in the framework of the Organisation for Economic Co-operation and Development (OECD) in 1974 in the wake of the 1973 oil crisis. The IEA was initially dedicated to responding to physical disruptions in the supply of oil, as well as serving as an information source on statistics about the international oil market and other energy sectors.

The IEA acts as a policy adviser to its member states, but also works with non-member countries, especially China, India and Russia. The Agency's mandate has broadened to focus on the "3Es" of sound energy policy: energy security, economic development, and environmental protection. The latter has focused on mitigating climate change. The IEA has a broad role in promoting alternate energy sources (including renewable energy), rational energy policies, and multinational energy technology co-operation.

IEA member countries are required to maintain total oil stock levels equivalent to at least 90 days of the previous year's net imports. At the end of July 2009, IEA member countries held a combined stockpile of almost 4,300,000,000 barrels (680,000,000 m3) of oil.
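The 90-day rule is straightforward to express; here is a minimal sketch (the import figure is hypothetical, not any member's actual data):

```python
# Minimum IEA stock level: 90 days of the previous year's net imports
def required_stock_barrels(net_imports_bpd: float, days: int = 90) -> float:
    return net_imports_bpd * days

example_imports_bpd = 2_000_000       # hypothetical: 2 million barrels/day
print(f"{required_stock_barrels(example_imports_bpd):,.0f} barrels")  # 180,000,000
```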

The Executive Director of the IEA is Nobuo Tanaka; the Deputy Executive Director is Richard Jones. On 11 March 2011, the former Dutch Minister of Economic Affairs, Maria van der Hoeven, was selected as the next Executive Director. She will take office on 1 September 2011.

The IEA was established to meet the industrial countries' energy organization needs in the wake of the 1973–1974 oil crisis. Although the OECD had structures for dealing with energy questions such as the Council, the Executive Committee, the Oil Committee, and the Energy Committee, it could not respond effectively to the crisis. The OECD had adopted the Oil Apportionment Decision [C(72)201(Final)], laying out procedures to be carried out in the event of an oil supply emergency in Europe, but these procedures were not implemented during the crisis. In addition, the OECD had adopted recommendations on oil stockpiling in Europe. Due to their limited scope, these measures could have only a limited role in an oil supply emergency.

Establishment of the new organization was proposed by United States Secretary of State Henry Kissinger in his address to the Pilgrims Society in London on 12 December 1973. Also in December 1973, at the summit of the European Communities in Copenhagen, Danish Prime Minister Anker Jørgensen, who chaired the summit, declared that the summit found it "useful to study with other oil-consuming countries within the framework of the OECD ways of dealing with the common short and long term energy problems of consumer countries". At the Washington Energy Conference on 11-13 February 1974, the ministers of thirteen principal oil-consuming countries stated "the need for a comprehensive action program to deal with all facets of the world energy situation by cooperative measures. In so doing they will build on the work of the OECD".

While creating a new energy organization, it was decided to utilize the framework of the OECD, as it had experience in dealing with oil and other energy questions, had expertise in economic analysis and statistics, had established staff, physical facilities, legal status, and privileges and immunities, and was the principal organization of the industrial countries. However, the OECD operates by unanimity, and not all member states were ready to participate. Therefore, instead of an integrated approach, an autonomous approach was chosen.

The IEA was created on 18 November 1974 by the Agreement on an International Energy Program (I.E.P. Agreement).

During its history, the IEA has intervened in oil markets twice by releasing oil stocks: in 1991 during the Gulf War, and in 2005, when it released 2 million barrels per day (320×10^3 m3/d) for a month after Hurricane Katrina disrupted US production.

At the Heiligendamm Summit in June 2007, the G8 acknowledged an EU proposal for an international initiative on energy efficiency tabled in March 2007, and agreed to explore, together with the International Energy Agency, the most effective means to promote energy efficiency internationally. A year later, on 8 June 2008, the G8 countries, China, India, South Korea and the European Community decided to establish the International Partnership for Energy Efficiency Cooperation, at the Energy Ministerial meeting hosted by Japan in the frame of the 2008 G8 Presidency, in Aomori.

Guy Pearse states that the IEA has consistently underestimated the potential for renewable energy alternatives.

The Energy Watch Group (EWG), a coalition of scientists and politicians which analyses official energy industry predictions, claims that the IEA has had an institutional bias towards traditional energy sources and has been using "misleading data" to undermine the case for renewable energy, such as wind and solar. A 2008 EWG report compares IEA projections about the growth of wind power capacity and finds that it has consistently underestimated the amount of energy the wind power industry can deliver.

For example, in 1998 the IEA predicted that global wind electricity generation would total 47.4 GW by 2020, but EWG's report states that this level was reached by the end of 2004. The report also said that the IEA has not learned the lesson of previous underestimates: in the year before the report, net additions of wind power globally were four times greater than the average IEA estimate from its 1995-2004 predictions.

Amid discontent from across the renewables sector at the IEA's performance as a global energy watchdog, the International Renewable Energy Agency was formed on January 26, 2009. The aim is to have the agency fully operational by 2010 with an initial annual budget of €25m.

The IEA Photovoltaic Power Systems Programme (PVPS) is one of the collaborative R&D agreements established within the IEA. Since its establishment in 1993, the PVPS participants have been conducting a variety of joint projects on the photovoltaic conversion of solar energy into electricity.

Ahead of the launch of the 2009 World Energy Outlook, the British daily newspaper The Guardian, referring to an unidentified senior IEA official, alleged that the agency was deliberately downplaying the risk of peak oil under pressures from the USA. According to a second unidentified former senior IEA official it was "imperative not to anger the Americans" and that the world has already entered the 'peak oil' zone.

The Guardian also referred to a team of scientists from Uppsala University in Sweden who studied the 2008 World Energy Outlook and concluded the forecasts of the IEA were unattainable. According to their peer-reviewed report, oil production in 2030 would not exceed 75 million barrels per day (11.9×10^6 m3/d) while the IEA forecasts a production of 105 million barrels per day (16.7×10^6 m3/d). The lead author of the report, Dr. Kjell Aleklett, has claimed that IEA's reports are "political documents".

The anticorruption NGO Global Witness wrote in its report Heads in the Sand that "Global Witness' analysis demonstrates that the Agency continues to retain an overly-optimistic, and therefore misleading, view about potential future oil production." According to Global Witness, "the Agency's over-confidence, despite credible data, external analysis and underlying fundamentals all strongly suggesting a more precautionary approach, has had a disastrous global impact."

Gas | Understanding and definition of Gas

Gas is one of the three classical states of matter. Near absolute zero, a substance exists as a solid. As heat is added, it melts into a liquid at its melting point (see phase change), boils into a gas at its boiling point, and, if heated high enough, enters a plasma state in which the electrons are so energized that they leave their parent atoms. A pure gas may be made up of individual atoms (e.g. a noble gas or atomic gas like neon), elemental molecules made from one type of atom (e.g. oxygen), or compound molecules made from a variety of atoms (e.g. carbon dioxide). A gas mixture, such as air, contains a variety of pure gases. What distinguishes a gas from liquids and solids is the vast separation of the individual gas particles; this separation usually makes a colorless gas invisible to the human observer. The interaction of gas particles in the presence of electric and gravitational fields is considered negligible, as their velocity vectors remain essentially constant.

The gaseous state of matter is found between the liquid and plasma states, the latter of which provides the upper temperature boundary for gases. Bounding the lower end of the temperature scale lie degenerate quantum gases, which have been gaining increasing attention. High-density atomic gases supercooled to extremely low temperatures are classified by their statistical behavior as either a Bose gas or a Fermi gas. For a comprehensive listing of these exotic states of matter, see the list of states of matter.

The word "gas" was invented by Jan Baptist van Helmont, perhaps as a Dutch pronunciation re-spelling of "chaos".

An equation of state (for gases) is a mathematical model used to roughly describe or predict the state properties of a gas. At present, there is no single equation of state that accurately predicts the properties of all gases under all conditions; instead, a number of more accurate equations of state have been developed for gases in specific temperature and pressure ranges. The "gas models" most widely discussed are "perfect gas", "ideal gas", and "real gas". Each of these models has its own set of assumptions to facilitate the analysis of a given thermodynamic system, and each successive model expands the temperature range of coverage to which it applies. The first powered flight at Kitty Hawk, North Carolina in 1903 illustrates an early successful application of these relationships; more recent examples include the 2009 maiden flights of the first solar-powered aircraft, the Solar Impulse, and of the first commercial airliner constructed primarily from composite materials, the Dreamliner.

As most gases are difficult to observe directly with our senses, they are described through four physical properties or macroscopic characteristics: the gas's pressure, volume, number of particles (chemists group them by moles), and temperature. These four characteristics were repeatedly observed by scientists such as Robert Boyle, Jacques Charles, John Dalton, Joseph Gay-Lussac, and Amedeo Avogadro for a variety of gases in a great many settings. Their detailed studies ultimately led to a mathematical relationship among these properties, expressed by the ideal gas law (see the sketch below).
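A minimal sketch of that relationship, the ideal gas law pV = nRT, with one familiar check (one mole at 0 °C in 22.4 litres gives roughly atmospheric pressure):

```python
R = 8.314  # J/(mol*K), universal gas constant

def pressure_ideal(n_mol: float, t_kelvin: float, v_m3: float) -> float:
    """Pressure in pascals from the ideal gas law p = nRT/V."""
    return n_mol * R * t_kelvin / v_m3

print(f"{pressure_ideal(1.0, 273.15, 0.0224):.0f} Pa")  # ~101,000 Pa, about 1 atm
```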

Gas particles are widely separated from one another, and as such are not intermolecularly bonded to the same degree as liquids or solids. These intermolecular forces result from electrostatic interactions between gas particles. Like-charged areas of different gas particles repel, while oppositely charged regions attract one another; gases that contain permanently charged ions are known as plasmas. Gaseous compounds with polar covalent bonds contain permanent charge imbalances and so experience relatively strong intermolecular forces, although the compound's net charge remains neutral. Transient, randomly induced charges exist across non-polar covalent bonds of molecules, and the electrostatic interactions they cause are referred to as van der Waals forces. The interaction of these intermolecular forces varies within a substance, and this determines many of the physical properties unique to each gas. A quick comparison of boiling points for compounds formed by ionic and covalent bonds leads us to this conclusion. Drifting smoke particles provide some insight into low-pressure gas behavior.

Compared to the other states of matter, gases have remarkably low density and viscosity. Pressure and temperature influence the separation and speed of the particles within a given volume; this variation is referred to as compressibility. Particle separation and size also influence the optical properties of gases, as reflected in their refractive indices. Finally, gas particles spread apart, or diffuse, to distribute themselves homogeneously throughout any container.

When observing a gas, it is typical to specify a frame of reference or length scale. A larger length scale corresponds to a macroscopic or global point of view of the gas. This region (referred to as a volume) must be sufficient in size to contain a large sampling of gas particles. The resulting statistical analysis of this sample size produces the "average" behavior (i.e. velocity, temperature or pressure) of all the gas particles within the region. By way of contrast, a smaller length scale corresponds to a microscopic or particle point of view.

From this global vantage point, the gas characteristics measured are either in terms of the gas particles themselves (velocity, pressure, or temperature) or of their surroundings (volume). By way of example, Robert Boyle studied pneumatic chemistry for a small portion of his career. One of his experiments related the macroscopic properties of pressure and volume of a gas. His experiment used a J-tube manometer, which looks like a test tube in the shape of the letter J. Boyle trapped an inert gas in the closed end of the test tube with a column of mercury, thereby fixing the number of particles and the temperature. He observed that when the pressure on the gas was increased, by adding more mercury to the column, the trapped gas volume decreased; mathematicians describe this as an inverse relationship. Furthermore, when Boyle multiplied the pressure and volume of each observation, the product was always the same, a constant. This relationship held true for every gas Boyle observed, leading to the law PV = k, named to honor his work in this field of study.
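Boyle's observation is easy to reproduce numerically; the sketch below fixes the constant k from one state and shows that halving the volume doubles the pressure while pV stays put (the starting state is an arbitrary illustration):

```python
# Boyle's law: at fixed temperature and particle count, p*V = k (a constant)
k = 101_325 * 0.010                  # chosen state: 1 atm (Pa) at 10 L (0.010 m3)

for v in (0.010, 0.005, 0.002):      # progressively compress the trapped gas
    p = k / v
    print(f"V = {v*1000:4.0f} L -> p = {p/101_325:.1f} atm, pV = {p*v:.1f} J")
# pV is identical in every row: the inverse relationship Boyle measured
```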

There are many mathematical tools to choose from when analyzing gas properties. As gases are subjected to extreme conditions, these tools become more complex, from the Euler equations (inviscid flow) to the Navier-Stokes equations, which fully account for viscous effects. These equations are tailored to the conditions of the gas system in question. Boyle's lab equipment allowed the use of algebra to obtain his analytical results; his results were possible because he was studying gases at relatively low pressures, where they behaved in an "ideal" manner. These ideal relationships enable safety calculations for a variety of flight conditions and materials. The high-technology equipment in use today was designed to help us safely explore more exotic operating environments where gases no longer behave in an "ideal" manner. This advanced mathematics, including statistics and multivariable calculus, makes possible the solution of such complex dynamic situations as space vehicle reentry, where analysts must ensure that material properties are not exceeded under the extreme loading conditions; in this flight regime, the gas is no longer behaving ideally.

The symbol used to represent pressure in equations is "p" or "P" with SI units of pascals.

When describing a container of gas, the term pressure (or absolute pressure) refers to the average force the gas exerts on the surface area of the container. Within this volume, it is sometimes easier to visualize the gas particles moving in straight lines until they collide with the container walls. The force imparted by a gas particle on the container during a collision is the change in momentum of the particle. As a reminder from classical mechanics, momentum is by definition the product of mass and velocity. During a collision, only the normal component of velocity changes; a particle traveling parallel to the wall never changes its momentum. The average force on a surface must therefore be the average change in linear momentum from all of these gas particle collisions. More precisely, pressure is the sum of all the normal components of force exerted by the particles impacting the walls of the container, divided by the surface area of the wall.
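That momentum-transfer picture leads to the standard kinetic-theory result p = N m <v^2> / (3V); a sketch with illustrative numbers for helium (the rms speed used is an assumed round value for room temperature):

```python
# Kinetic-theory pressure of an ideal gas: p = N * m * v_rms**2 / (3 * V)
def kinetic_pressure(n_particles: float, m_kg: float, v_rms: float, v_m3: float) -> float:
    return n_particles * m_kg * v_rms**2 / (3 * v_m3)

N_A = 6.022e23            # one mole of atoms
M_HE = 6.64e-27           # kg per helium atom
V_RMS = 1350.0            # m/s, roughly helium's rms speed near room temperature

print(f"{kinetic_pressure(N_A, M_HE, V_RMS, 0.0224):.3g} Pa")  # ~1e5 Pa, near 1 atm
```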

The symbol used to represent temperature in equations is T with SI units of kelvins.

The average kinetic energy of a gas particle is proportional to its absolute temperature. For example, the volume of a balloon shrinks when the trapped gas particles are slowed by the addition of extremely cold nitrogen. The temperature of any physical system is related to the motions of the particles (molecules and atoms) that make up the system. In statistical mechanics, temperature is the measure of the average kinetic energy stored in a particle. The methods of storing this energy are dictated by the degrees of freedom of the particle itself (energy modes). Kinetic energy added to gas particles by way of collisions (an endothermic process) produces linear, rotational, and vibrational motion. By contrast, a molecule in a solid can only increase its vibrational modes with the addition of heat, as the lattice crystal structure prevents both linear and rotational motions. Heated gas molecules have a greater range of speeds, which constantly varies due to collisions with other particles. The speed range can be described by the Maxwell-Boltzmann distribution; use of this distribution implies ideal gases near thermodynamic equilibrium.
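The distribution itself is easy to write down; the sketch below evaluates the Maxwell-Boltzmann speed density and its peak (most probable speed) for nitrogen at an assumed 300 K:

```python
import math

K_B = 1.380649e-23   # J/K, Boltzmann constant

def maxwell_boltzmann_pdf(v: float, m_kg: float, t: float) -> float:
    """f(v) = 4*pi*(m/(2*pi*k*T))**1.5 * v**2 * exp(-m*v**2/(2*k*T))."""
    a = m_kg / (2 * math.pi * K_B * t)
    return 4 * math.pi * a ** 1.5 * v ** 2 * math.exp(-m_kg * v ** 2 / (2 * K_B * t))

M_N2, T = 4.65e-26, 300.0                  # nitrogen molecule mass (kg), temperature (K)
v_peak = math.sqrt(2 * K_B * T / M_N2)     # speed at which f(v) is maximal
print(f"most probable speed ~ {v_peak:.0f} m/s")   # ~422 m/s
```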

The symbol used to represent volume in equations is "V" with SI units of cubic meters.

When performing a thermodynamic analysis, it is typical to speak of intensive and extensive properties. Properties that depend on the amount of gas (either by mass or volume) are called extensive properties, while properties that do not depend on the amount of gas are called intensive properties. Specific volume is an example of an intensive property, because it is the volume occupied per unit mass of a gas and is identical throughout a system at equilibrium: 1000 atoms of gaseous protactinium occupy the same space as any other 1000 atoms at a given temperature and pressure. This concept is easier to visualize for solids, such as iron, which are incompressible compared to gases. In a seat ejection from a rocket sled, for example, the specific volume increases as the gases expand, while mass is conserved. Since a gas fills any container in which it is placed, volume is an extensive property.

The symbol used to represent density in equations is ρ (pronounced rho) with SI units of kilograms per cubic meter. This term is the reciprocal of specific volume.

Since gas molecules can move freely within a container, their mass is normally characterized by density. Density is the mass per unit volume of a substance, or simply the inverse of specific volume. For gases, the density can vary over a wide range because the particles are free to move closer together when constrained by pressure or volume or both. This variation of density is referred to as compressibility. Like pressure and temperature, density is a state variable of a gas, and the change in density during any process is governed by the laws of thermodynamics. For a static gas, the density is the same throughout the entire container; density is therefore a scalar quantity, a simple physical quantity that has a magnitude but no direction. It can be shown by kinetic theory that the density is inversely proportional to the volume of the container in which a fixed mass of gas is confined: for a fixed mass, the density decreases as the volume increases.

If one could observe a gas under a powerful microscope, one would see a collection of particles (molecules, atoms, ions, electrons, etc.) without any definite shape or volume that are in more or less random motion. These neutral gas particles only change direction when they collide with another particle or the sides of the container. By stipulating that these collisions are perfectly elastic, this substance is transformed from a real to an ideal gas. This particle or microscopic view of a gas is described by the Kinetic-molecular theory. All of the assumptions behind this theory can be found in the postulates section of Kinetic Theory.

Each of the effects described here adds to the complexity of the problem's solution. As the density of a gas increases with rising pressure, the intermolecular forces play a more substantial role in gas behavior, and the ideal gas law no longer provides "reasonable" results. At the upper end of engine temperature ranges (e.g. combustor sections, around 1300 K), complex fuel particles absorb internal energy by means of rotations and vibrations that cause their specific heats to vary from those of diatomic molecules and noble gases. At more than double that temperature, electronic excitation and dissociation of the gas particles begin to occur, causing the pressure to adjust to a greater number of particles (the transition from gas to plasma). Finally, all of the thermodynamic processes above were presumed to describe uniform gases whose velocities varied according to a fixed distribution; a non-equilibrium situation implies that the flow field must be characterized in some manner to enable a solution. One of the first attempts to expand the boundaries of the ideal gas law was to include coverage for different thermodynamic processes by adjusting the equation to read pV^n = constant and then varying n through different values, such as the specific heat ratio, γ.
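The pV^n = constant family is simple to apply; a sketch comparing an adiabatic compression (n = γ = 1.4 for a diatomic gas) against the isothermal case (n = 1), with an arbitrary starting state:

```python
# Polytropic process: p1 * V1**n == p2 * V2**n
def final_pressure(p1: float, v1: float, v2: float, n: float) -> float:
    return p1 * (v1 / v2) ** n

P1, V1, V2 = 101_325.0, 1.0, 0.5      # Pa and m3: halve the volume
print(f"isothermal (n=1.0): {final_pressure(P1, V1, V2, 1.0):,.0f} Pa")   # 202,650
print(f"adiabatic  (n=1.4): {final_pressure(P1, V1, V2, 1.4):,.0f} Pa")   # ~267,400
# the adiabatic case ends at higher pressure because compression also heats the gas
```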

Phosphorus | Understanding and definition of Phosphorus | Function and use of phosphorus

Phosphorus is the chemical element that has the symbol P and atomic number 15. A multivalent nonmetal of the nitrogen group, phosphorus as a mineral is almost always present in its maximally oxidized state, as inorganic phosphate rocks. Elemental phosphorus exists in two major forms – white phosphorus and red phosphorus, but due to its high reactivity, phosphorus is never found as a free element on Earth.

The first form of elemental phosphorus to be produced (white phosphorus, in 1669) emits a faint glow upon exposure to oxygen – hence its name given from Greek mythology, Φωσφόρος meaning "light-bearer" (Latin Lucifer), referring to the "Morning Star", the planet Venus. Although the term "phosphorescence", meaning glow after illumination, derives from this property of phosphorus, the glow of phosphorus originates from oxidation of the white (but not red) phosphorus and should be called chemiluminescence.

Phosphorus compounds are used in explosives, nerve agents, friction matches, fireworks, pesticides, toothpastes, and detergents.

Phosphorus is a component of DNA, RNA, ATP, and also the phospholipids that form all cell membranes. It is thus an essential element for all living cells, and organisms tend to accumulate and concentrate it. For example, elemental phosphorus was historically first isolated from the sediment in human urine, and bone ash was an important early phosphate source. Low phosphate levels are an important limit to growth in some aquatic systems. Today, the most important commercial use of phosphorus-based chemicals is the production of fertilizers, to replace the phosphorus that plants remove from the soil.

Phosphorus has several forms (allotropes) with strikingly different properties. The two most common allotropes are white phosphorus and red phosphorus. Red phosphorus is an intermediate phase between white and violet phosphorus. Another form, scarlet phosphorus, is obtained by allowing a solution of white phosphorus in carbon disulfide to evaporate in sunlight. Black phosphorus is obtained by heating white phosphorus under high pressure (about 12,000 standard atmospheres, or 1.2 GPa). In appearance, properties, and structure it resembles graphite, being black and flaky, a conductor of electricity, and composed of puckered sheets of linked atoms. Another allotrope is diphosphorus; it contains a phosphorus dimer as its structural unit and is highly reactive.

White phosphorus has two forms, a low-temperature β form and a high-temperature α form. Both contain a P4 tetrahedron as a structural unit, in which each atom is bound to the other three atoms by a single bond. This P4 tetrahedron is also present in liquid and gaseous phosphorus up to 800 °C, when it starts decomposing into P2 molecules. White phosphorus is the least stable, most reactive, most volatile, least dense, and most toxic of the allotropes. Its toxicity led to the discontinuation of its use in matches. White phosphorus is thermodynamically unstable under normal conditions and will gradually change to red phosphorus. This transformation, which is accelerated by light and heat, means white phosphorus almost always contains some red phosphorus and therefore appears yellow; for this reason, it is also called yellow phosphorus. It glows greenish in the dark (when exposed to oxygen), is highly flammable and pyrophoric (self-igniting) upon contact with air, and is toxic (causing severe liver damage on ingestion). Because of its pyrophoricity, white phosphorus is used as an additive in napalm. Its combustion has a characteristic garlic odour, and samples are commonly coated with white "(di)phosphorus pentoxide", which consists of P4O10 tetrahedra with oxygen inserted between the phosphorus atoms and at their vertices. White phosphorus is insoluble in water but soluble in carbon disulfide.

The white allotrope can be produced using several different methods. In one process, calcium phosphate, which is derived from phosphate rock, is heated in an electric or fuel-fired furnace in the presence of carbon and silica. Elemental phosphorus is then liberated as a vapour and can be collected under phosphoric acid. This process is similar to the first synthesis of phosphorus from calcium phosphate in urine.

In red phosphorus, one of the P4 bonds is broken and one additional bond is formed with a neighbouring tetrahedron, resulting in a more chain-like structure. Red phosphorus may be formed by heating white phosphorus to 250 °C (482 °F) or by exposing white phosphorus to sunlight. Phosphorus after this treatment exists as an amorphous network of atoms, which reduces strain and gives greater stability; further heating results in the red phosphorus becoming crystalline. Red phosphorus is therefore not a distinct allotrope but rather an intermediate phase between white and violet phosphorus, and most of its properties have a range of values. Red phosphorus does not catch fire in air at temperatures below 260 °C, whereas white phosphorus ignites at about 30 °C.

Violet phosphorus is a thermodynamically stable form of phosphorus that can be produced by day-long annealing of red phosphorus above 550 °C. In 1865, Hittorf discovered that when phosphorus was recrystallized from molten lead, a red/purple form is obtained. This form is therefore sometimes known as "Hittorf's phosphorus" (or violet or α-metallic phosphorus).

Black phosphorus is the least reactive allotrope and the thermodynamically stable form below 550 °C. It is also known as β-metallic phosphorus and has a structure somewhat resembling that of graphite. High pressures are usually required to produce black phosphorus, but it can also be produced at ambient conditions using metal salts as catalysts.

The diphosphorus allotrope, P2, is stable only at high temperatures. The dimeric unit contains a triple bond and is analogous to N2. P2 can normally be obtained only under extreme conditions (for example, from P4 at 1100 kelvin). Nevertheless, some progress has been made in generating the diatomic molecule in homogeneous solution under normal conditions, with the use of some transition metal complexes (based, for example, on tungsten and niobium).

The discovery of phosphorus is credited to the German alchemist Hennig Brand in 1669, although other chemists might have discovered phosphorus around the same time. Brand experimented with urine, which contains considerable quantities of dissolved phosphates from normal metabolism. Working in Hamburg, Brand attempted to create the fabled philosopher's stone through the distillation of some salts by evaporating urine, and in the process produced a white material that glowed in the dark and burned brilliantly. It was named phosphorus mirabilis ("miraculous bearer of light"). His process originally involved letting urine stand for days until it gave off a terrible smell. Then he boiled it down to a paste, heated this paste to a high temperature, and led the vapours through water, where he hoped they would condense to gold. Instead, he obtained a white, waxy substance that glowed in the dark. Brand had discovered phosphorus, the first element discovered since antiquity. We now know that Brand produced ammonium sodium hydrogen phosphate, (NH4)NaHPO4. While the quantities were essentially correct (it took about 1,100 L of urine to make about 60 g of phosphorus), it was unnecessary to allow the urine to rot. Later scientists would discover that fresh urine yielded the same amount of phosphorus.

Since that time, "phosphor" and "phosphorescence" have been used loosely to describe substances that shine in the dark without burning. However, as mentioned above, even though the term phosphorescence was originally coined by analogy with the glow from the oxidation of elemental phosphorus, it is now reserved for another, fundamentally different process: re-emission of light after illumination.

Brand at first tried to keep the method secret, but later sold the recipe for 200 thaler to D. Krafft of Dresden, who could then make it as well and toured much of Europe with it, including England, where he met with Robert Boyle. The secret that it was made from urine leaked out, and first Johann Kunckel (1630–1703) in Sweden (1678) and later Boyle in London (1680) also managed to make phosphorus. Boyle states that Krafft gave him no information as to the preparation of phosphorus other than that it was derived from "somewhat that belonged to the body of man". This gave Boyle a valuable clue, however, so that he, too, managed to make phosphorus, and published the method of its manufacture. He later improved Brand's process by using sand in the reaction (still using urine as the base material).

Due to its reactivity with air and many other oxygen-containing substances, phosphorus is not found free in nature but it is widely distributed in many different minerals.

Phosphate rock, which is partially made of apatite (an impure tri-calcium phosphate mineral), is an important commercial source of this element. About 50 percent of the global phosphorus reserves are in the Arab nations. Large deposits of apatite are located in China, Russia, Morocco, Florida, Idaho, Tennessee, Utah, and elsewhere. Albright and Wilson in the United Kingdom and their Niagara Falls plant, for instance, used phosphate rock from Connetable, Tennessee, and Florida in the 1890s and 1900s; by 1950 they were using phosphate rock mainly from Tennessee and North Africa. In the early 1990s, Albright and Wilson's purified wet phosphoric acid business was being adversely affected by phosphate rock sales by China and by the entry of their long-standing Moroccan phosphate suppliers into the purified wet phosphoric acid business.

White phosphorus was first made commercially for the match industry in the 19th century, by distilling off phosphorus vapour from precipitated phosphates mixed with ground coal or charcoal, heated in an iron retort. The precipitated phosphates were made from ground-up bones that had been de-greased and treated with strong acids. Carbon monoxide and other flammable gases produced during the reduction process were burnt off in a flare stack.

This process became obsolete when the submerged-arc furnace for phosphorus production was introduced to reduce phosphate rock. Calcium phosphate (phosphate rock), mostly mined in Florida and North Africa, can be heated to 1,200–1,500 °C with sand (mostly SiO2) and coke (impure carbon) to produce vaporized tetraphosphorus, P4 (melting point 44.2 °C), which is subsequently condensed into a white powder under water to prevent oxidation. Even under water, white phosphorus is slowly converted to the more stable red phosphorus allotrope (melting point 597 °C). Both the white and red allotropes of phosphorus are insoluble in water.
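The overall carbothermal reduction described above is conventionally summarized by the following balanced equation (a standard textbook form, not quoted from this text):

```latex
% Phosphate rock + sand + coke -> slag + white phosphorus + carbon monoxide
\[
2\,\mathrm{Ca_3(PO_4)_2} + 6\,\mathrm{SiO_2} + 10\,\mathrm{C}
\;\longrightarrow\; 6\,\mathrm{CaSiO_3} + \mathrm{P_4} + 10\,\mathrm{CO}
\]
```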

The electric furnace method allowed production to increase to the point where phosphorus could be used in weapons of war. In World War I it was used in incendiaries, smoke screens, and tracer bullets. A special incendiary bullet was developed to shoot at hydrogen-filled Zeppelins over Britain (hydrogen being highly flammable if ignited). During World War II, Molotov cocktails of benzene and phosphorus were distributed in Britain to specially selected civilians within the British resistance operation, for defence, and phosphorus incendiary bombs were used in the war on a large scale. Burning phosphorus is difficult to extinguish, and if it splashes onto human skin it has horrific effects.

Today phosphorus production is greater than ever. It is used as a precursor for various chemicals, in particular the herbicide glyphosate, sold under the brand name Roundup. Production of white phosphorus takes place at large facilities, and it is transported heated, in liquid form. Some major accidents have occurred during transportation; train derailments at Brownston, Nebraska, and Miamisburg, Ohio, led to large fires. The worst accident in recent times was an environmental one in 1968, when phosphorus spilled into the sea from a plant at Placentia Bay, Newfoundland. Thermphos International is Europe's only producer of elemental phosphorus.

General Electric | History and definition of General Electric | Symbol of General Electric

General Electric Company (NYSE: GE), or GE, is an American multinational conglomerate corporation incorporated in Schenectady, New York and headquartered in Fairfield, Connecticut, United States. The company operates through five segments: Energy, Technology Infrastructure, NBC Universal, Capital Finance and Consumer & Industrial. In 2011, Forbes ranked GE as the world's third largest company after JPMorgan Chase and HSBC, based on a formula that compared the total sales, profits, assets and market value of several multinational companies. The company has 287,000 employees around the world.

By 1890, Thomas Edison had brought together several of his business interests under one corporation to form Edison General Electric. At about the same time, Thomson-Houston Electric Company, under the leadership of Charles Coffin, gained access to a number of key patents through the acquisition of a number of competitors. Subsequently, General Electric was formed by the 1892 merger of Edison General Electric of Schenectady, New York and Thomson-Houston Electric Company of Lynn, Massachusetts; both plants remain in operation under the GE banner to this day. The company was incorporated in New York, with the Schenectady plant as headquarters for many years thereafter. Around the same time, General Electric's Canadian counterpart, Canadian General Electric, was formed.

In 1896, General Electric was one of the original 12 companies listed on the newly formed Dow Jones Industrial Average and, 115 years later, is the only one of them still listed on the Dow (though it has not been in the index continuously).

In 1911 the National Electric Lamp Association (NELA) was absorbed into General Electric's existing lighting business. GE then established its lighting division headquarters at Nela Park in East Cleveland, Ohio. Nela Park is still the headquarters for GE's lighting business. In 1935, GE was one of the top 30 companies traded at the London Stock Exchange.

GE's long history of working with turbines in the power generation field gave it the engineering know-how to move into the new field of aircraft turbosuperchargers. Led by Sanford Moss, GE introduced the first superchargers during World War I and continued to develop them during the interwar period. They became indispensable in the years immediately prior to World War II, and GE was the world leader in exhaust-driven supercharging when the war started. This experience, in turn, made GE a natural choice to develop the Whittle W.1 jet engine, which was demonstrated in the United States in 1941. Although its early work with Whittle's designs was later handed to the Allison Engine Company, GE Aviation emerged as one of the world's largest engine manufacturers, second only to the older, well-established British company Rolls-Royce plc, a leader in high-performance, heavy-duty jet engine design and manufacture.

In 2002, GE acquired the wind power assets of Enron during its bankruptcy proceedings. Enron Wind was the only surviving U.S. manufacturer of large wind turbines at the time; GE increased the division's engineering and supply capacity and doubled its annual sales to $1.2 billion in 2003. It acquired ScanWind in 2009.

Some consumers boycotted GE light bulbs, refrigerators and other products in the 1980s and 1990s to protest GE’s role in nuclear weapons production.

GE was one of the eight major computer companies throughout the 1960s: IBM, the largest, was nicknamed "Snow White", followed by the "Seven Dwarfs" of Burroughs, NCR, Control Data Corporation, Honeywell, RCA, UNIVAC and GE.

GE had an extensive line of general purpose and special purpose computers. Among them were the GE 200, GE 400, and GE 600 series general purpose computers; the GE 4010, GE 4020, and GE 4060 real-time process control computers; and the Datanet 30 and Datanet 355 message switching computers (the Datanet 30 and 355 were also used as front-end processors for GE mainframe computers). A Datanet 500 computer was designed, but never sold.

GE is a multinational conglomerate headquartered in Fairfield, Connecticut. Its New York main offices are located at 30 Rockefeller Plaza in Rockefeller Center, known as the GE Building for the prominent GE logo on the roof. NBC's headquarters and main studios are also located in the building. Through its RCA subsidiary, it has been associated with the Center since its construction in the 1930s.

The company describes itself as composed of a number of primary business units or "businesses." Each unit is itself a vast enterprise, many of which would, even as a standalone company, rank in the Fortune 500. The list of GE businesses varies over time as the result of acquisitions, divestitures and reorganizations. GE's tax return is the largest return filed in the United States; the 2005 return was approximately 24,000 pages when printed out, and 237 megabytes when submitted electronically. The company also "spends more on U.S. lobbying than any other company."

In 2005 GE launched its "Ecomagination" initiative in an attempt to position itself as a "green" company. GE is currently one of the biggest players in the wind power industry, and it is also developing new environment-friendly products such as hybrid locomotives, desalination and water reuse solutions, and photovoltaic cells. The company "plans to build the largest solar-panel-making factory in the U.S.," and has set goals for its subsidiaries to lower their greenhouse gas emissions.

On May 21, 2007, GE announced it would sell its GE Plastics division to petrochemicals manufacturer SABIC for net proceeds of $11.6 billion. The transaction took place on August 31, 2007, and the company name changed to SABIC Innovative Plastics, with Brian Gladden as CEO.

Jeffrey Immelt is the current chairman of the board and chief executive officer of GE. He was selected by GE's Board of Directors in 2000 to replace John Francis Welch Jr. (Jack Welch) following his retirement. Previously, Immelt had headed GE's Medical Systems division (now GE Healthcare) as its President and CEO. He has been with GE since 1982 and is on the board of two non-profit organizations.

His tenure as chairman and CEO started at a time of crisis: he took over the role on September 7, 2001, four days before the terrorist attacks on the United States, which killed two GE employees, cost GE's insurance business $600 million, and had a direct effect on the company's Aircraft Engines sector. Immelt has also been selected as one of President Obama's financial advisors concerning the economic rescue plan.

In 2004, after taking the reins as chairman, CEO Jeffrey Immelt commissioned a set of changes in the presentation of the brand to unify GE's diversified businesses. The changes included a new corporate color palette, small modifications to the GE logo, a new customized font (GE Inspira), and a new slogan, "imagination at work", replacing the longtime slogan "we bring good things to life", composed by David Lucas. The standard requires many headlines to be lowercased and adds visual "white space" to documents and advertising to promote an open and approachable image. The changes were designed by Wolff Olins and are used extensively in GE's marketing, literature and website.

Through these businesses, GE participates in a wide variety of markets including the generation, transmission and distribution of electricity (e.g. nuclear, gas and solar), lighting, industrial automation, medical imaging equipment, motors, railway locomotives, aircraft jet engines, and aviation services. It co-owns NBC Universal with Comcast. Through GE Commercial Finance, GE Consumer Finance, GE Equipment Services, and GE Insurance it offers a range of financial services as well. It has a presence in over 100 countries.

Since over half of GE's revenue is derived from financial services, it is arguably a financial company with a manufacturing arm. It is also one of the largest lenders in countries other than the United States, such as Japan. Even though the first wave of conglomerates (such as ITT Corporation, Ling-Temco-Vought and Tenneco) had fallen by the wayside by the mid-1980s, another wave (including Westinghouse and Tyco) tried and failed to emulate GE's success in the late 1990s.

During the 2011 Fukushima I Nuclear Power Plant catastrophe it became public that the six reactors in the plant had been designed by General Electric and that critics had opposed GE's design as far back as 1972.

In March 2011, The New York Times reported that despite earning $14.2 billion in worldwide profits, including more than $5 billion from U.S. operations, General Electric owed no taxes in 2010 and in fact claimed a tax benefit of $3.2 billion. The same article also pointed out that, despite its continually diminishing tax liability since the 1990s, GE had laid off one-fifth of its American workers since 2002.

GE was the focus of a 1991 short subject Academy Award-winning documentary, Deadly Deception: General Electric, Nuclear Weapons, and Our Environment, that juxtaposed GE's rosy "We Bring Good Things To Life" commercials with the true stories of workers and neighbors whose lives have been affected by the company's activities involving nuclear weapons.

GE was satirized on March 14, 1998, in the TV Funhouse segment of Saturday Night Live entitled "Conspiracy Theory Rock". The segment aired only once and was subsequently pulled by NBC.

GE's corporate culture and management practices are frequently lampooned in the NBC television series 30 Rock. In the first season episode "The Rural Juror", character Jack Donaghy opens a complex organization chart that depicts the ownership structure of General Electric's subsidiaries. The chart reveals that NBC is a subsidiary of Sheinhardt Wig Company, and NBC in turn owns subsidiaries not related to broadcasting or entertainment production.

Tokyo Electric Power Company | One of the largest electricity companies in the world | History and definition of Electric Power Company | Symbol of TEPCO

Tepco
The Tokyo Electric Power Company, Incorporated, also known as Toden or TEPCO, is an electric utility serving Japan's Kantō region, Yamanashi Prefecture, and the eastern portion of Shizuoka Prefecture. This area includes Tokyo. Its headquarters are located in Uchisaiwaicho, Chiyoda, Tokyo, and international branch offices exist in Washington, D.C., and London.

TEPCO is the fourth largest electric power company in the world (after E.ON, Électricité de France and RWE) and the largest in Asia. The amount of electricity it sells annually is about the same as Italy uses in a year. TEPCO holds one-third of the Japanese electricity market and is the largest of Japan's 10 electric utilities.

In 2007 Tepco was forced to shut the Kashiwazaki-Kariwa Nuclear Power Plant after the Niigata-Chuetsu-Oki Earthquake. That year it posted its first loss in 28 years. Corporate losses continued until the plant reopened in 2009. Following the March 2011 Tōhoku earthquake and tsunami, its power plant at Fukushima Daiichi was the site of a continuing nuclear disaster, one of the world's most serious. TEPCO could face ¥2 trillion ($23.6 billion) in special losses in the current business year to March 2012, and Japan plans to put TEPCO under effective state control as a guarantee for compensation payments to people affected by radiation.

Japan's ten regional electric companies, including TEPCO, were established in 1951 with the end of the state-run electric industry regime for national wartime mobilization.

In the 1950s, the company's primary goal was to facilitate a rapid recovery from the infrastructure devastation of World War II. After the recovery period, the company had to expand its supply capacity to catch up with the country's rapid economic growth by developing fossil fuel power plants and a more efficient transmission network.

In the 1960s and 1970s, the company faced the challenges of increased environmental pollution and oil shocks. TEPCO began addressing environmental concerns through expansion of its LNG fueled power plant network as well as greater reliance on nuclear generation. The first nuclear unit at the Fukushima Dai-ichi (Fukushima I) nuclear power plant began operational generation on March 26, 1970.

During the 1980s and 1990s, the widespread use of air conditioners and IT/OA appliances resulted in a growing gap between day and night electricity demand. In order to reduce surplus generation capacity and increase capacity utilization, TEPCO developed pumped-storage hydroelectric power plants and promoted thermal storage units.

TEPCO is now expected to play a key role in achieving Japan's targets for reduced carbon dioxide emissions under the Kyoto Protocol. It also faces difficulties from the trend towards deregulation in Japan's electric industry as well as low power demand growth. In light of these circumstances, TEPCO launched an extensive sales promotion campaign called 'Switch!', promoting all-electric housing in order both to make more efficient use of its generation capacity and to erode the market share of gas companies.

The company's power generation consists of two main networks. Fossil fuel power plants around Tokyo Bay are used for peak load supply and nuclear reactors in Fukushima and Niigata Prefecture provide base load supply. Additionally, hydroelectric plants in the mountainous areas outside the Kanto Plain, despite their relatively small capacity compared to fossil fuel and nuclear generation, remain important in providing peak load supply. The company also purchases electricity from other regional or wholesale electric power companies like Tohoku Electric Power Co., J-POWER, and Japan Atomic Power Company.

The company has built a radial and ring-shaped grid between power plants and urban/industrial demand areas. Each transmission line is designed to carry electricity at high voltage (66–500 kV) between power plants and substations. Normally transmission lines are strung between towers, but within the Tokyo metropolitan area high-voltage lines run underground.

From substations, electricity is transmitted via the distribution grid at lower voltage (6–22 kV). For high-voltage supply to large buildings and factories, distribution lines are connected directly to customers' electricity systems; in this case, customers must purchase and set up transformers and other equipment to run electric appliances. For low-voltage supply to houses and small shops, distribution lines are first connected to the company's transformers (seen on utility poles and in utility boxes), stepped down to 100/200 V, and finally connected to end users.
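To make the final step-down concrete, here is a minimal illustrative sketch in Python; the 6,600 V distribution figure and the ideal-transformer assumption are chosen for illustration only, not taken from TEPCO specifications:

    # Ideal transformer: secondary voltage = primary voltage / turns ratio.
    def secondary_voltage(primary_v: float, turns_ratio: float) -> float:
        return primary_v / turns_ratio

    # A pole transformer stepping an assumed 6,600 V distribution line down
    # to the 100 V household supply implies a turns ratio of 66:1.
    print(secondary_voltage(6600.0, 66.0))  # -> 100.0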

Under normal conditions, TEPCO's transmission and distribution infrastructure is notable as one of the most reliable electricity networks in the world: blackout frequency and average recovery time compare favorably with other electric companies in Japan and in other developed countries. The company instituted its first-ever rolling blackouts following the shutdown of the Fukushima I and II plants, which were close to the epicenter of the March 2011 earthquake. For example, on the morning of Tuesday, March 15, 2011, 700,000 households had no power for three hours. The company had to deal with a 10 million kW gap between demand and production on March 14, 2011.
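For a sense of scale, a rough back-of-the-envelope sketch in Python relating these figures; the average household load is an assumed placeholder, while the household count, outage duration and 10 million kW gap come from the text above:

    gap_kw = 10_000_000     # reported demand/production gap (kW)
    households = 700_000    # households blacked out on March 15
    outage_hours = 3        # reported outage duration
    avg_load_kw = 1.2       # assumed average load per household (kW)

    shed_kw = households * avg_load_kw
    print(f"load shed: {shed_kw / 1e6:.2f} GW of a {gap_kw / 1e6:.0f} GW gap")
    print(f"energy curtailed: {shed_kw * outage_hours / 1e6:.1f} GWh")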

On August 29, 2002, the government of Japan revealed that TEPCO was guilty of false reporting in routine governmental inspections of its nuclear plants and of systematic concealment of plant safety incidents. All seventeen of its boiling-water reactors were shut down for inspection as a result. TEPCO's chairman Hiroshi Araki, President Nobuya Minami, Vice-President Toshiaki Enomoto, as well as the advisers Shō Nasu and Gaishi Hiraiwa, stepped down by September 30, 2002. The utility "eventually admitted to two hundred occasions over more than two decades between 1977 and 2002, involving the submission of false technical data to authorities". Upon taking over leadership responsibilities, TEPCO's new president issued a public commitment that the company would take all the countermeasures necessary to prevent fraud and restore the nation's confidence. By the end of 2005, generation at suspended plants had been restarted, with government approval.

In 2007, however, the company announced to the public that an internal investigation had revealed a large number of unreported incidents. These included an unexpected unit criticality in 1978 and additional systematic false reporting, which had not been uncovered during the 2002 inquiry. Along with scandals at other Japanese electric companies, this failure to ensure corporate compliance resulted in strong public criticism of Japan's electric power industry and the nation's nuclear energy policy. Again, the company made no effort to identify those responsible.

On 11 March 2011 several nuclear reactors in Japan were badly damaged by the 2011 Tōhoku earthquake and tsunami.

The Tōkai Nuclear Power Plant lost external electric power and experienced the failure of one of its two cooling pumps and two of its three emergency power generators. External electric power could be restored only two days after the earthquake.

The Japanese government declared an "atomic power emergency" and evacuated thousands of residents living close to TEPCO's Fukushima I plant. Reactors 4, 5 and 6 had been shut down prior to the earthquake for planned maintenance. The remaining reactors were shut down automatically after the earthquake, but the subsequent tsunami flooded the plant, knocking out emergency generators needed to run the pumps that cool and control the reactors. The flooding and earthquake damage prevented assistance from being brought in from elsewhere. Over the following days there was evidence of partial nuclear meltdowns in reactors 1, 2 and 3; hydrogen explosions destroyed the upper cladding of the buildings housing reactors 1 and 3; an explosion damaged reactor 2's containment; and severe fires broke out at reactor 4.

The Japanese authorities rated the events at reactors 1, 2 and 3 as level 5 (Accident With Wider Consequences) on the International Nuclear Event Scale, while the events at reactor 4 were placed at level 3 (Serious Incident); the situation as a whole was rated level 7 (Major Accident). On 20 March, Japan's chief cabinet secretary Yukio Edano "confirmed for the first time that the nuclear complex — with heavy damage to reactors and buildings and with radioactive contamination throughout — would be closed once the crisis was over." At the same time, questions were being asked, looking back, about whether company management had waited too long before pumping seawater into the plant, a measure that has now ruined the reactors, and, looking forward, "whether time is working for or against the workers and soldiers struggling to re-establish cooling at the crippled plant." One report noted that the defense minister, Toshimi Kitazawa, on 21 March had committed "military firefighters to spray water around the clock on an overheated storage pool at Reactor No. 3." The report concluded with "a senior nuclear executive who insisted on anonymity but has many contacts in Japan sa[ying that] ... caution ... [as] plant operators have been struggling to reduce workers' risk ... had increased the risk of a serious accident. He suggested that Japan's military assume primary responsibility. 'It's the same trade-off you have to make in war, and that is the sacrifice of a few for the safety of many,' he said. 'But a corporation just cannot do that.'"

There has been considerable criticism of the way TEPCO handled the crisis. It was reported that seawater was used only after Prime Minister Naoto Kan ordered it following an explosion at one reactor on the evening of 12 March, though executives had started considering it that morning; TEPCO did not begin using seawater at other reactors until 13 March. Referring to that same early decision-making sequence, "Michael Friedlander, a former senior operator at a Pennsylvania power plant with General Electric reactors similar to the troubled ones in Japan, said the crucial question is whether Japanese officials followed G.E.'s emergency operating procedures." Kuni Yogo, a former atomic energy policy planner in Japan's Science and Technology Agency, and Akira Omoto, a former TEPCO executive and a member of the Japanese Atomic Energy Commission, both questioned the decisions of TEPCO's management during the crisis. Kazuma Yokota, a safety inspector with Japan's Nuclear and Industrial Safety Agency (NISA), was at Fukushima I at the time of the earthquake and tsunami and provided details of the early progression of the crisis.

Oil sands | Understanding and definition of Oil Sands | Where oil sands production is largest

Bituminous sands, colloquially known as oil sands or tar sands, are a type of unconventional petroleum deposit. The sands contain naturally occurring mixtures of sand, clay, water, and a dense and extremely viscous form of petroleum technically referred to as bitumen (or colloquially "tar" due to its similar appearance, odour, and colour). Oil sands are found in large amounts in many countries throughout the world, but are found in extremely large quantities in Canada and Venezuela.

The crude bitumen contained in the Canadian oil sands is described by Canadian authorities as "petroleum that exists in the semi-solid or solid phase in natural deposits. Bitumen is a thick, sticky form of crude oil, so heavy and viscous (thick) that it will not flow unless heated or diluted with lighter hydrocarbons. At room temperature, it is much like cold molasses". Venezuelan authorities often refer to similar types of crude oil as extra-heavy oil, because Venezuelan reservoirs are warmer and the oil is somewhat less viscous, allowing it to flow more easily.

Oil sands reserves have only recently been considered to be part of the world's oil reserves, as higher oil prices and new technology enable them to be profitably extracted and upgraded to usable products. They are often referred to as unconventional oil or crude bitumen, in order to distinguish the bitumen extracted from oil sands from the free-flowing hydrocarbon mixtures known as crude oil traditionally produced from oil wells.

Making liquid fuels from oil sands requires energy for steam injection and refining. This process generates two to four times the amount of greenhouse gases per barrel of final product as the production of conventional oil. If combustion of the final products is included (the so-called "well-to-wheels" approach), oil sands extraction, upgrading and use emit 10 to 45% more greenhouse gases than conventional crude.
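As a quick arithmetic sketch of these ranges in Python, with conventional crude normalized to 1; only the multipliers and percentages come from the text:

    conventional = 1.0  # conventional crude, normalized emissions per barrel

    # Production-only emissions: two to four times conventional.
    production_range = (2 * conventional, 4 * conventional)

    # Well-to-wheels (including combustion): 10% to 45% above conventional.
    well_to_wheels_range = (1.10 * conventional, 1.45 * conventional)

    print("production:", production_range)          # (2.0, 4.0)
    print("well to wheels:", well_to_wheels_range)  # (1.1, 1.45)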

The exploitation of bituminous deposits and seeps dates back to Paleolithic times. The earliest known use of bitumen was by Neanderthals some 40,000 years ago; bitumen has been found adhering to stone tools used by Neanderthals at sites in Syria. After the arrival of Homo sapiens, humans used bitumen for the construction of buildings and the waterproofing of reed boats, among other uses. In ancient Egypt, bitumen was important in creating Egyptian mummies; in fact, the word mummy is derived from the Arabic word mūmiyyah, which means bitumen.

In ancient times, bitumen was primarily a Mesopotamian commodity used by the Sumerians and Babylonians, although it was also found in the Levant and Persia; the area along the Tigris and Euphrates rivers was littered with hundreds of pure bitumen seepages. The Mesopotamians used bitumen for waterproofing boats and buildings. In North America, early European fur traders found Canadian First Nations using bitumen from the vast Athabasca oil sands to waterproof their birch bark canoes. In Europe, bituminous sands were extensively mined near Pechelbronn, where the vapor separation process was in use in 1742.

The name tar sands was applied to bituminous sands in the late 19th and early 20th century. People who saw the bituminous sands during this period were familiar with the large amounts of tar residue produced in urban areas as a by-product of the manufacture of coal gas for urban heating and lighting. The word "tar" to describe these natural bitumen deposits is really a misnomer, since, chemically speaking, tar is a man-made substance produced by the destructive distillation of organic material, usually coal. Since then, coal gas has almost completely been replaced by natural gas as a fuel, and coal tar as a material for paving roads has been replaced by the petroleum product asphalt. Naturally occurring bitumen is chemically more similar to asphalt than to tar, and the term oil sands (or oilsands) is more commonly used in the producing areas than tar sands because synthetic oil is what is manufactured from the bitumen.

Oil sands are now considered a serious alternative to conventional crude oil, since crude oil is becoming scarcer. Oil sands and oil shale have the potential to generate oil for centuries.

Many countries in the world have large deposits of oil sands, including the United States, Russia, and various countries in the Middle East. However, the world's largest deposits occur in two countries: Canada and Venezuela, each of which has oil sand reserves approximately equal to the world's total reserves of conventional crude oil. As a result of the development of Canadian oil sands reserves, 44% of Canadian oil production in 2007 was from oil sands, with an additional 18% being heavy crude oil, while light oil and condensate had declined to 38% of the total. Because growth of oil sands production has exceeded declines in conventional crude oil production, Canada has become the largest supplier of oil and refined products to the United States, ahead of Saudi Arabia and Mexico. Venezuelan production is also very large, but due to political problems within its national oil company, estimates of its production data are not reliable. Outside analysts believe Venezuela's oil production has declined in recent years, though there is much debate on whether this decline is depletion-related or not.

Bituminous sands are a major source of unconventional oil. Conventional crude oil is normally extracted from the ground by drilling oil wells into a petroleum reservoir, allowing oil to flow into them under natural reservoir pressures, although artificial lift and techniques such as water flooding and gas injection are usually required to maintain production as reservoir pressure drops toward the end of a field's life. Because extra-heavy oil and bitumen flow very slowly, if at all, toward producing wells under normal reservoir conditions, the sands must be extracted by strip mining or the oil made to flow into wells by in situ techniques, which reduce the viscosity by injecting steam, solvents, and/or hot air into the sands. These processes can use more water and require larger amounts of energy than conventional oil extraction, although many conventional oil fields also require large amounts of water and energy to achieve good rates of production.

Much of this extra energy use arises because heavy crude feedstock needs pre-processing before it is fit for conventional refineries. This pre-processing is called 'upgrading', and its key components are as follows:
  • removal of water, sand, physical waste, and lighter products
  • catalytic purification by hydrodemetallisation (HDM), hydrodesulfurization (HDS) and hydrodenitrogenation (HDN)
  • hydrogenation through carbon rejection or catalytic hydrocracking (HCR)
As carbon rejection is very inefficient and wasteful, catalytic hydrocracking is preferred in most cases. All these processes consume large amounts of energy and water, while emitting more carbon dioxide than conventional oil production.

Catalytic purification and hydrocracking are together known as hydroprocessing. The big challenge in hydroprocessing is dealing with the impurities found in heavy crude, as they poison the catalysts over time. Many efforts have been made to ensure high catalyst activity and long catalyst life; catalyst materials and pore size distributions are key parameters that need to be optimized to deal with these challenges, and the optimum varies from place to place depending on the feedstock.
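As an illustration of the catalytic purification step, a representative hydrodesulfurization reaction removes sulfur from an organosulfur compound as hydrogen sulfide; ethanethiol is used here only as a simple example of our choosing:

    C2H5SH + H2 → C2H6 + H2S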

At the present time, only Canada has a large-scale commercial oil sands industry, though a small amount of oil from oil sands is produced in Venezuela. Because of increasing oil sands production, Canada has become the largest single supplier of oil and products to the United States. Oil sands now are the source of almost half of Canada's oil production, although due to the 2008 economic downturn work on new projects has been deferred, while Venezuelan production has been declining in recent years. Oil is not produced from oil sands on a significant level in other countries.

Since Great Canadian Oil Sands (now Suncor) started operation of its mine in 1967, bitumen has been extracted on a commercial scale from the Athabasca Oil Sands by surface mining. In the Athabasca sands there are very large amounts of bitumen covered by little overburden, making surface mining the most efficient method of extracting it. The overburden consists of water-laden muskeg (peat bog) over top of clay and barren sand. The oil sands themselves are typically 40 to 60 metres deep, sitting on top of flat limestone rock. Originally, the sands were mined with draglines and bucket-wheel excavators and moved to the processing plants by conveyor belts. In recent years companies such as Syncrude and Suncor have switched to much cheaper shovel-and-truck operations using the biggest power shovels (100 or more tons) and dump trucks (400 tons) in the world. This has held production costs to around $27 per barrel of synthetic crude oil despite rising energy and labour costs.

After excavation, hot water and caustic soda (NaOH) are added to the sand, and the resulting slurry is piped to the extraction plant, where it is agitated and the oil skimmed from the top. Provided that the water chemistry is appropriate to allow bitumen to separate from sand and clay, the combination of hot water and agitation releases bitumen from the oil sand and allows small air bubbles to attach to the bitumen droplets. The bitumen froth floats to the top of separation vessels and is further treated to remove residual water and fine solids. Bitumen is much thicker than traditional crude oil, so it must be either mixed with lighter petroleum (either liquid or gas) or chemically split before it can be transported by pipeline for upgrading into synthetic crude oil.

The bitumen is then transported and eventually upgraded into synthetic crude oil. About two tons of oil sands are required to produce one barrel (roughly 1/8 of a ton) of oil. Originally, roughly 75% of the bitumen was recovered from the sand. However, recent enhancements to this method include Tailings Oil Recovery (TOR) units which recover oil from the tailings, Diluent Recovery Units to recover naphtha from the froth, Inclined Plate Settlers (IPS) and disc centrifuges. These allow the extraction plants to recover well over 90% of the bitumen in the sand. After oil extraction, the spent sand and other materials are returned to the mine, which is eventually reclaimed.
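A quick consistency check of these figures in Python; both inputs come from the text above, and the result is the implied recoverable-oil content of the mined sand by mass:

    sand_per_barrel_t = 2.0    # tonnes of oil sand mined per barrel (from text)
    barrel_mass_t = 1.0 / 8.0  # a barrel of oil weighs roughly 1/8 tonne (from text)

    oil_fraction = barrel_mass_t / sand_per_barrel_t
    print(f"implied oil content: {oil_fraction:.1%}")  # -> about 6.2%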

The Alberta Taciuk Process extracts bitumen from oil sands through dry retorting: oil sand is moved through a rotating drum, cracking the bitumen with heat and producing lighter hydrocarbons. Although tested, this technology is not yet in commercial use.

Four oil sands mines are currently in operation and two more (Jackpine and Kearl) are in the initial stages of development. The original Suncor mine opened in 1967, the Syncrude mine started in 1978, Shell Canada opened its Muskeg River mine (Albian Sands) in 2003, and Canadian Natural Resources Ltd opened its Horizon Project in 2009. New mines under construction or undergoing approval include projects by Shell Canada, Imperial Oil (the Kearl Oil Sands Project), Synenco Energy, and Suncor.

It is estimated that approximately 90% of the Alberta oil sands and nearly all of Venezuelan sands are too far below the surface to use open-pit mining. Several in-situ techniques have been developed to extract this oil.
 