
Linksys: History and Definition

Linksys by Cisco, commonly known as Linksys, is a brand of home and small-office networking products now produced by Cisco Systems. It was once a separate company, founded in 1988 and acquired by Cisco in 2003. Products currently and previously sold under the Linksys brand include broadband and wireless routers, consumer- and small-business-grade Ethernet switches, VoIP equipment, wireless internet video cameras, AV products, network storage systems, and other products. Linksys products were widely available off-the-shelf in North America from consumer electronics stores (CompUSA and Best Buy), internet retailers, and big-box retail stores (Walmart). As an independent firm, Linksys' most significant competitors were D-Link and Netgear, the latter for a time being a brand of Cisco competitor Nortel.

In 2007, Cisco CEO John Chambers described the long-term plan to phase out the independent Linksys brand: "It will all come over time into a Cisco brand. The reason we kept [the] Linksys brand [is] because it was better known in the US than even Cisco was for the consumer. As you go globally there's very little advantage in that." From 2008, all Linksys products sold were packaged and branded as "Linksys by Cisco"; some former Linksys products were merged into the "Valet" brand (albeit with a large Cisco logo and a smaller Linksys name still on the product). The formerly independent Linksys website now redirects to Cisco's. Small-business inquiries into former Linksys products are directed to Cisco's products and reseller network.

Linksys was founded in 1988 in a garage in Irvine, California. The founders, Janie and Victor Tsao (who received a master's degree in computer science from the Illinois Institute of Technology in 1980), were immigrants from Taiwan who held second jobs as consultants specializing in pairing American technology vendors with manufacturers in Taiwan. The company's first products were printer sharers that connected multiple PCs to a printer. From there it expanded into Ethernet hubs, network cards, and cables. By 1994, it had grown to 55 employees with annual revenues of $6.5 million.

The company received a major boost in 1995, when Microsoft released Windows 95 with built-in networking functions that expanded the market for its products. Linksys established its first U.S. retail channels with Fry's Electronics (1995) and Best Buy (1996). In 1999, the company announced the first Fast Ethernet PCMCIA Card for notebook PCs. In 2000, it introduced the first 8-port router with SNMP and QoS, and in 2001 it shipped its millionth cable/DSL router. By 2003, when the company was acquired by Cisco, it had 305 employees and revenues of more than $500 million.

Cisco continued to invest to expand the company's product line. In April 2005, Cisco acquired VoIP maker Sipura Technology and made it part of the Linksys division. For a time, VoIP products based on Sipura technology were offered under the Linksys Voice System brand. (They are now sold by Cisco as part of the Linksys Business Series.) In July 2008, Cisco acquired Seattle-based Pure Networks, a vendor of home networking-management software. Pure Networks had previously provided the tools and software infrastructure used to create the Linksys Easy Link Advisor. Pure Networks was integrated into Linksys, with employees remaining in Seattle and continuing to work on making it easier for users to set up and manage home networks.

The WAG200G has a 211 MHz AR7 MIPS32 CPU with 4 MB of flash memory and 16 MB of DRAM on the PCB. It measures 5.5 × 5.5 × 1.25 inches (14 × 14 × 3.2 cm) (W×H×D) and weighs 0.77 pounds (0.35 kg). The all-in-one device functions as a high-speed ADSL2+ modem, a Wireless-G access point, a router, and a 4-port Ethernet switch. The built-in wireless access point complies with the 802.11g standard, which offers transfer speeds of up to 54 Mbit/s, and is backwards compatible with 802.11b devices at speeds of up to 11 Mbit/s. The access point can support connections from up to 32 wireless devices. The device also offers four built-in 10/100 RJ-45 ports to connect Ethernet-enabled computers, print servers, and other devices.

The NSLU2 is a network-attached storage device with 8 MB of flash memory, 32 MB of SDRAM, a 100 Mbit/s Ethernet port, and two USB ports. The NSLU2 was discontinued in 2008, but it remains in demand because of the numerous enhancements developed by open-source community projects. The later NAS200 added SATA ports.

The Media Hub 300 and 400 series are network-attached storage devices that allow users to share digital media across a network. Once the Media Hub is connected to the network, it searches for media content residing within the network, including on any UPnP devices found, and aggregates it into one centralized location. The built-in media reader can directly import photos from CompactFlash cards, SD cards, and Memory Sticks without the need for a computer. Storage capacity options are 500 GB or 1 TB, with an extra empty drive bay.

The Media Hub's GUI gives a holistic view of the media located on the network regardless of where each file actually resides. Albums are consolidated; artwork, track numbers, and other metadata are downloaded; and all information can be sorted by a variety of criteria. Included is automated backup software that helps preserve data through continuous storage backup.

Intel Itanium: Understanding and Definition

Itanium is a family of 64-bit Intel microprocessors that implement the Intel Itanium architecture (formerly called IA-64). Intel markets the processors for enterprise servers and high-performance computing systems. The architecture originated at Hewlett-Packard (HP), and was later jointly developed by HP and Intel.

The Itanium architecture is based on explicit instruction-level parallelism, in which the compiler decides which instructions to execute in parallel. This contrasts with other superscalar architectures, which depend on the processor to manage instruction dependencies at runtime. Itanium cores up to and including Tukwila execute up to six instructions per clock cycle. The first Itanium processor, codenamed Merced, was released in 2001.

Itanium-based systems have been produced by HP (the HP Integrity Servers line) and several other manufacturers. As of 2008, Itanium was the fourth-most deployed microprocessor architecture for enterprise-class systems, behind x86-64, IBM POWER, and SPARC. The most recent processor, Tukwila, originally planned for release in 2007, was released on February 8, 2010.

By the time Itanium was released in June 2001, its performance was not superior to competing RISC and CISC processors. Itanium competed at the low-end (primarily 4-CPU and smaller systems) with servers based on x86 processors, and at the high end with IBM's POWER architecture and Sun Microsystems' SPARC architecture. Intel repositioned Itanium to focus on high-end business and HPC computing, attempting to duplicate x86's successful "horizontal" market (i.e., single architecture, multiple systems vendors). The success of this initial processor version was limited to replacing PA-RISC in HP systems, Alpha in Compaq systems and MIPS in SGI systems, though IBM also delivered a supercomputer based on this processor. POWER and SPARC remained strong, while the 32-bit x86 architecture continued to grow into the enterprise space. With economies of scale fueled by its enormous installed base, x86 has remained the preeminent "horizontal" architecture in enterprise computing.

Only a few thousand systems using the original Merced Itanium processor were sold, due to relatively poor performance, high cost and limited software availability. Recognizing that the lack of software could be a serious problem for the future, Intel made thousands of these early systems available to independent software vendors (ISVs) to stimulate development. HP and Intel brought the next-generation Itanium 2 processor to market a year later.

The Itanium 2 processor was released in 2002, and was marketed for enterprise servers rather than for the whole gamut of high-end computing. The first Itanium 2, code-named McKinley, was jointly developed by HP and Intel. It relieved many of the performance problems of the original Itanium processor, which were mostly caused by an inefficient memory subsystem. McKinley contained 221 million transistors (of which 25 million were for logic), measured 19.5 mm by 21.6 mm (421 mm²), and was fabricated in a 180 nm bulk CMOS process with six layers of aluminium metallization.

In 2003, AMD released the Opteron, which implemented its 64-bit architecture (x86-64). Opteron gained rapid acceptance in the enterprise server space because it provided an easy upgrade from x86. Intel responded by implementing x86-64 in its Xeon microprocessors in 2004.

Intel released a new Itanium 2 family member, codenamed Madison, in 2003. Madison used a 130 nm process and was the basis of all new Itanium processors until Montecito was released in June 2006.

In March 2005, Intel announced that it was working on a new Itanium processor, codenamed Tukwila, to be released in 2007. Tukwila would have four processor cores and would replace the Itanium bus with a new Common System Interface, which would also be used by a new Xeon processor. Later that year, Intel revised Tukwila's delivery date to late 2008.

In November 2005, the major Itanium server manufacturers joined with Intel and a number of software vendors to form the Itanium Solutions Alliance to promote the architecture and accelerate software porting. The Alliance announced that its members would invest $10 billion in Itanium solutions by the end of the decade.

In 2006, Intel delivered Montecito (marketed as the Itanium 2 9000 series), a dual-core processor that roughly doubled performance and decreased energy consumption by about 20 percent.

Intel released the Itanium 2 9100 series, codenamed Montvale, in November 2007. In May 2009 the schedule for Tukwila, its follow-on, was revised again, with release to OEMs planned for the first quarter of 2010.

The Itanium 9300 series processor, codenamed Tukwila, was released on 8 February 2010 with greater performance and memory capacity.

The device uses a 65 nm process and includes two to four cores, up to 24 MB of on-die cache, Hyper-Threading technology, and integrated memory controllers. It implements double-device data correction, which helps to fix memory errors. Tukwila also implements Intel QuickPath Interconnect (QPI) to replace the Itanium bus-based architecture. It has a peak interprocessor bandwidth of 96 GB/s and a peak memory bandwidth of 34 GB/s. With QuickPath, the processor has integrated memory controllers and interfaces with memory directly, using QPI interfaces to connect directly to other processors and I/O hubs. QuickPath is also used on Intel processors using the Nehalem microarchitecture, making it probable that Tukwila and Nehalem will be able to use the same chipsets. Tukwila incorporates four memory controllers, each of which supports multiple DDR3 DIMMs, much like the Nehalem-based Xeon processor code-named Beckton.

Intel has extensively documented the Itanium instruction set and microarchitecture, and the technical press has provided overviews. The architecture has been renamed several times during its history. HP originally called it PA-WideWord. Intel later called it IA-64, then Itanium Processor Architecture (IPA), before settling on Intel Itanium Architecture, but it is still widely referred to as IA-64.

It is a 64-bit, register-rich, explicitly parallel architecture. The base data word is 64 bits, and memory is byte-addressable. The logical address space is 2^64 bytes. The architecture implements predication, speculation, and branch prediction. It uses a hardware register-renaming mechanism rather than simple register windowing for parameter passing; the same mechanism also permits parallel execution of loops. Speculation, prediction, predication, and renaming are under control of the compiler: each instruction word includes extra bits for this. This approach is the distinguishing characteristic of the architecture.

The architecture implements 128 integer registers, 128 floating point registers, 64 one-bit predicates, and eight branch registers. The floating point registers are 82 bits long to preserve precision for intermediate results.

Each 128-bit instruction word contains three instructions, and the fetch mechanism can read up to two instruction words per clock from the L1 cache into the pipeline. When the compiler can take maximum advantage of this, the processor can execute six instructions per clock cycle. The processor has thirty functional execution units in eleven groups. Each unit can execute a particular subset of the instruction set, and each unit executes at a rate of one instruction per cycle unless execution stalls waiting for data. While not all units in a group execute identical subsets of the instruction set, common instructions can be executed in multiple units.
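The arithmetic behind the six-wide issue can be sketched in a few lines. The 5-bit template field and three 41-bit instruction slots below are the published IA-64 bundle encoding:

```python
# IA-64 instruction bundle layout: a 5-bit template field tells the
# hardware which execution-unit types the three 41-bit slots map to.
TEMPLATE_BITS = 5
SLOT_BITS = 41
SLOTS_PER_BUNDLE = 3

bundle_bits = TEMPLATE_BITS + SLOTS_PER_BUNDLE * SLOT_BITS
print(bundle_bits)  # 128 bits per bundle, matching the instruction word size

# Fetching two bundles per clock gives the six-instruction issue width.
issue_width = 2 * SLOTS_PER_BUNDLE
print(issue_width)  # 6
```

The template field is what makes the parallelism "explicit": the compiler, not the hardware, records which slots may execute together.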

The execution unit groups include:
  1. Six general-purpose ALUs, two integer units, one shift unit
  2. Four data cache units
  3. Six multimedia units, two parallel shift units, one parallel multiply, one population count
  4. Two 82-bit floating-point multiply-accumulate units, two SIMD floating-point multiply-accumulate units (two 32-bit operations each)
  5. Three branch units
The compiler can often group instructions into sets of six that can execute at the same time. Since the floating-point units implement a multiply-accumulate operation, a single floating point instruction can perform the work of two instructions when the application requires a multiply followed by an add: this is very common in scientific processing. When it occurs, the processor can execute four FLOPs per cycle. For example, the 800 MHz Itanium had a theoretical rating of 3.2 GFLOPS and the fastest Itanium 2, at 1.67 GHz, was rated at 6.67 GFLOPS.
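The peak-FLOPS figures above follow directly from counting the multiply-accumulate units. A minimal sketch of that arithmetic:

```python
# Peak floating-point throughput: two FP multiply-accumulate units, each
# counting as two floating-point operations (multiply + add) per cycle.
FMAC_UNITS = 2
FLOPS_PER_FMAC = 2  # multiply + add

def peak_gflops(clock_ghz):
    return clock_ghz * FMAC_UNITS * FLOPS_PER_FMAC

print(peak_gflops(0.8))   # 800 MHz Itanium -> 3.2 GFLOPS
print(peak_gflops(1.67))  # ~6.7 GFLOPS (quoted as 6.67 for the fastest Itanium 2)
```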

From 2002 to 2006, Itanium 2 processors shared a common cache hierarchy. They had 16 kB of Level 1 instruction cache and 16 kB of Level 1 data cache. The L2 cache was unified (both instruction and data) and was 256 kB. The Level 3 cache was also unified and varied in size from 1.5 MB to 24 MB. The 256 kB L2 cache contains sufficient logic to handle semaphore operations without disturbing the main arithmetic logic unit (ALU).

Main memory is accessed through a bus to an off-chip chipset. The Itanium 2 bus was initially called the McKinley bus, but is now usually referred to as the Itanium bus. The speed of the bus has increased steadily with new processor releases. The bus transfers 2 × 128 bits per clock cycle, so the 200 MHz McKinley bus transferred 6.4 GB/s, and the 533 MHz Montecito bus transfers 17.056 GB/s.
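The bus bandwidth figures can be checked with the same 2 × 128-bit-per-clock rule:

```python
# Itanium bus peak bandwidth: 2 x 128 bits (32 bytes) transferred per clock.
def bus_gb_per_s(clock_mhz):
    bytes_per_clock = 2 * 128 // 8  # 32 bytes per clock cycle
    return clock_mhz * 1e6 * bytes_per_clock / 1e9

print(bus_gb_per_s(200))  # McKinley bus at 200 MHz -> 6.4 GB/s
print(bus_gb_per_s(533))  # Montecito bus at 533 MHz -> 17.056 GB/s
```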

Itanium processors released prior to 2006 had hardware support for the IA-32 architecture to permit support for legacy server applications, but performance for IA-32 code was much worse than for native code and also worse than the performance of contemporaneous x86 processors. In 2005, Intel developed the IA-32 Execution Layer (IA-32 EL), a software emulator that provides better performance. With Montecito, Intel therefore eliminated hardware support for IA-32 code.

In 2006, with the release of Montecito, Intel made a number of enhancements to the basic processor architecture including:
  1. Hardware multithreading: Each processor core maintains context for two threads of execution. When one thread stalls during memory access, the other thread can execute. Intel calls this "coarse multithreading" to distinguish it from the "hyper-threading technology" Intel integrated into some x86 and x86-64 microprocessors. Coarse multithreading is well matched to the Intel Itanium Architecture and results in an appreciable performance gain.
  2. Hardware support for virtualization: Intel added Intel Virtualization Technology (Intel VT-i), which provides hardware assists for core virtualization functions. Virtualization allows a software "hypervisor" to run multiple operating system instances on the processor concurrently.
  3. Cache enhancements: Montecito added a split L2 cache, which included a dedicated 1 MB L2 cache for instructions. The original 256 kB L2 cache was converted to a dedicated data cache. Montecito also included up to 12 MB of on-die L3 cache.
As of 2009, several manufacturers offer Itanium systems, including HP, SGI, NEC, Fujitsu, Hitachi, and Groupe Bull. In addition, Intel offers a chassis that can be used by system integrators to build Itanium systems. HP, the only one of the industry's top four server manufacturers to offer Itanium-based systems today, manufactures at least 80% of all Itanium systems. HP sold 7200 systems in the first quarter of 2006. The bulk of systems sold are enterprise servers and machines for large-scale technical computing, with an average selling price per system in excess of US$200,000. A typical system uses eight or more Itanium processors.

The Itanium bus interfaces to the rest of the system via a chipset. Enterprise server manufacturers differentiate their systems by designing and developing chipsets that interface the processor to memory, interconnects, and peripheral controllers. The chipset is the heart of the system-level architecture for each system design. Developing a chipset costs tens of millions of dollars and represents a major commitment to the Itanium platform. IBM created a chipset in 2003, and Intel in 2002, but neither has developed chipsets to support newer technologies such as DDR2 or PCI Express. Modern Itanium chipsets supporting those technologies are currently manufactured by HP, Fujitsu, SGI, NEC, and Hitachi.

The "Tukwila" Itanium processor model has been designed to share a common chipset with the Intel Xeon processor EX (Intel’s Xeon processor designed for four processor and larger servers). The goal is to streamline system development and reduce costs for server OEMs, many of whom develop both Itanium- and Xeon-based servers.

As of 2010, Itanium is supported by the following operating systems:
  1. Windows Server 2003 and Windows Server 2008
  2. HP-UX 11i
  3. OpenVMS I64
  4. NonStop OS
  5. multiple GNU/Linux distributions (including Debian, Ubuntu, Gentoo, Red Hat and Novell SuSE)
  6. FreeBSD/ia64
However, Microsoft announced in 2010 that Windows Server 2008 R2 would be the last version of Windows Server to support Itanium, and that it would also discontinue development of the Itanium versions of Visual Studio and SQL Server. Likewise, Red Hat Enterprise Linux 5 was the last Itanium edition of Red Hat Enterprise Linux, and Canonical's Ubuntu 10.04 LTS was the last supported Ubuntu release on Itanium. HP will not be supporting or certifying Linux on Itanium 9300 (Tukwila) servers.

Oracle Corporation announced in March 2011 that it would drop development of application software for Itanium platforms, with the explanation that "Intel management made it clear that their strategic focus is on their x86 microprocessor and that Itanium was nearing the end of its life."

HP sells a virtualization technology for Itanium called Integrity Virtual Machines.

To allow more software to run on the Itanium, Intel supported the development of compilers optimized for the platform, especially its own suite of compilers. Starting in November 2010, with the introduction of new product suites, the Intel Itanium Compilers were no longer bundled with the Intel x86 compilers in a single product. Intel offers Itanium tools and Intel x86 tools, including compilers, independently in different product bundles. GCC, Open64 and MS Visual Studio 2005 (and later) are also able to produce machine code for Itanium. According to the Itanium Solutions Alliance over 13,000 applications were available for Itanium based systems in early 2008, though Sun has contested Itanium application counts in the past. The ISA also supports Gelato, an Itanium HPC user group and developer community that ports and supports open source software for Itanium.

Emulation is a technique that allows a computer to execute binary code that was compiled for a different type of computer. Before IBM's acquisition of QuickTransit in 2009, application binaries for IRIX/MIPS and Solaris/SPARC could run on Linux/Itanium via a type of emulation called "dynamic binary translation". Similarly, HP implemented a method to execute PA-RISC/HP-UX applications on Itanium/HP-UX via emulation, to simplify migration of its PA-RISC customers to the radically different Itanium instruction set. Itanium processors can also run the mainframe environment GCOS from Groupe Bull and several IA-32 operating systems via instruction set simulators.
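The core idea of dynamic binary translation can be illustrated loosely (this is not QuickTransit's or HP's actual design): guest code is translated into host-executable form once, cached, and reused, so repeated execution of a block pays the translation cost only on first use. The two-opcode guest ISA below is invented for illustration:

```python
# Toy dynamic binary translator: guest "instructions" are translated into
# host callables once per basic block and cached by block id.

def translate(block):
    """Translate a guest basic block into a host-executable function."""
    ops = []
    for opcode, arg in block:
        if opcode == "ADD":
            ops.append(lambda acc, n=arg: acc + n)
        elif opcode == "MUL":
            ops.append(lambda acc, n=arg: acc * n)
        else:
            raise ValueError("unknown guest opcode: " + opcode)
    def run(acc):
        for op in ops:   # execute the translated host code
            acc = op(acc)
        return acc
    return run

translation_cache = {}

def execute(block_id, block, acc):
    if block_id not in translation_cache:  # translate only on first use
        translation_cache[block_id] = translate(block)
    return translation_cache[block_id](acc)

guest_block = [("ADD", 3), ("MUL", 4)]
print(execute(0, guest_block, 1))  # (1 + 3) * 4 = 16
```

A real translator works on machine instructions and emits native code, but the translate-once, cache, and re-execute structure is the same.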

Itanium is aimed at the enterprise server and high-performance computing (HPC) markets. Other enterprise- and HPC-focused processor lines include Sun Microsystems' SPARC T3, Fujitsu's SPARC64 VII+ and IBM's POWER7. Measured by quantity sold, Itanium's most serious competition comes not from other enterprise architectures but from x86-64 processors including Intel's own Xeon line and AMD's Opteron line. As of 2009, most servers were being shipped with x86-64 processors.

In 2005, Itanium systems accounted for about 14% of HPC systems revenue, but the percentage has declined as the industry shifts to x86-64 clusters for this application.

An October 2008 paper by Gartner on the Tukwila processor stated that "...the future roadmap for Itanium looks as strong as that of any RISC peer like Power or SPARC."

An Itanium-based computer first appeared on the TOP500 list of supercomputers in November 2001. The best position ever achieved by an Itanium 2 based system on the list was #2, achieved in June 2004, when Thunder (LLNL) entered the list with an Rmax of 19.94 teraflops. In November 2004, Columbia entered the list at #2 with 51.8 teraflops, and there was at least one Itanium-based computer in the top 10 from then until June 2007. The peak number of Itanium-based machines on the list occurred in November 2004, at 84 systems (16.8%); by June 2010, this had dropped to five systems (1%).

The Itanium processors show a progression in capability. Merced was a proof of concept. McKinley dramatically improved the memory hierarchy and allowed Itanium to become reasonably competitive. Madison, with the shift to a 130 nm process, allowed for enough cache space to overcome the major performance bottlenecks. Montecito, with a 90 nm process, allowed for a dual-core implementation and a major improvement in performance per watt. Montvale added three new features: core-level lockstep, demand-based switching and front-side bus frequency of up to 667 MHz.

At ISSCC 2011, Intel presented a paper titled "A 32nm 3.1 Billion Transistor 12-Wide-Issue Itanium Processor for Mission Critical Servers." Given Intel's history of disclosing details about Itanium microprocessors at ISSCC, the paper most likely refers to Poulson. It disclosed that the processor (presumably Poulson) will be a 12-wide-issue design implemented with 3.1 billion transistors. Analyst David Kanter speculates that Poulson will use a new microarchitecture with a more advanced form of multi-threading that uses as many as four threads, to improve performance for both single-threaded and multi-threaded workloads.

Digital Camera: Understanding and Definition

A digital camera (or digicam) is a camera that takes video or still photographs, or both, by recording images digitally via an electronic image sensor. It is the main device used in the field of digital photography. Most 21st-century cameras are digital.

Digital cameras can do things film cameras cannot: displaying images on a screen immediately after they are recorded, storing thousands of images on a single small memory device, and deleting images to free storage space. The majority, including most compact cameras, can record moving video with sound as well as still photographs. Some can crop and stitch pictures and perform other elementary image editing. Some have a built-in GPS receiver and can produce geotagged photographs.

The optical system works the same as in film cameras, typically using a lens with a variable diaphragm to focus light onto an image pickup device. The diaphragm and shutter admit the correct amount of light to the imager, just as with film, but the image pickup device is electronic rather than chemical. Most digicams, apart from camera phones and a few specialized types, have a standard tripod screw.

Digital cameras are incorporated into many devices ranging from PDAs and mobile phones (called camera phones) to vehicles. The Hubble Space Telescope and other astronomical devices are essentially specialized digital cameras.

Digital cameras are made in a wide range of sizes, prices and capabilities. The majority are camera phones, operated as a mobile application through the cellphone menu. Professional photographers and many amateurs use larger, more expensive digital single-lens reflex cameras (DSLR) for their greater versatility. Between these extremes lie digital compact cameras and bridge digital cameras that "bridge" the gap between amateur and professional cameras. Specialized cameras including multispectral imaging equipment and astrographs continue to serve the scientific, military, medical and other special purposes for which digital photography was invented.

Compact cameras are designed to be tiny and portable and are particularly suitable for casual and "snapshot" use, thus are also called point-and-shoot cameras. The smallest, generally less than 20 mm thick, are described as subcompacts or "ultra-compacts" and some are nearly credit card size.

Most, apart from ruggedized or water-resistant models, incorporate a retractable lens assembly allowing a thin camera to have a moderately long focal length and thus fully exploit an image sensor larger than that on a camera phone, and a mechanized lens cap to cover the lens when retracted. The retracted and capped lens is protected from keys, coins and other hard objects, thus making a thin, pocketable package. Subcompacts commonly have one lug and a short wrist strap which aids extraction from a pocket, while thicker compacts may have two lugs for attaching a neck strap.

Compact cameras are usually designed to be easy to use, sacrificing advanced features and picture quality for compactness and simplicity; images can usually only be stored using lossy compression (JPEG). Most have a built-in flash usually of low power, sufficient for nearby subjects. Live preview is almost always used to frame the photo. Most have limited motion picture capability. Compacts often have macro capability and zoom lenses but the zoom range is usually less than for bridge and DSLR cameras. Generally a contrast-detect autofocus system, using the image data from the live preview feed of the main imager, focuses the lens.
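The contrast-detect principle mentioned above can be sketched simply: step the lens through focus positions, score the live-preview frame at each position by its contrast, and keep the position that scores highest. The variance-of-intensities score and the synthetic preview frames below are illustrative assumptions, not any camera maker's actual algorithm:

```python
# Minimal contrast-detect autofocus sketch: higher pixel-intensity
# variance is used as a proxy for a sharper (better-focused) image.

def contrast(frame):
    mean = sum(frame) / len(frame)
    return sum((p - mean) ** 2 for p in frame) / len(frame)

def autofocus(frames_by_position):
    """Return the lens position whose preview frame has the most contrast."""
    return max(frames_by_position,
               key=lambda pos: contrast(frames_by_position[pos]))

frames = {
    0: [100, 102, 101, 99],  # blurry: intensities smeared together
    1: [80, 140, 60, 150],   # sharp: strong edges, high variance
    2: [95, 108, 97, 104],   # slightly out of focus
}
print(autofocus(frames))  # 1
```

Real implementations score a focus window with edge filters rather than raw variance, but the hill-climbing structure is the same.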

Typically, these cameras incorporate a nearly-silent leaf shutter into their lenses.

For lower cost and smaller size, these cameras typically use image sensors with a diagonal of approximately 6 mm, corresponding to a crop factor around 6. This gives them weaker low-light performance, greater depth of field, generally closer focusing ability, and smaller components than cameras using larger sensors.

Starting in 2011, some compact digital cameras can take 3D still photos. These 3D compact stereo cameras can capture 3D panoramic photos for playback on a 3D TV.

Bridge cameras are higher-end digital cameras that physically and ergonomically resemble DSLRs and share with them some advanced features, but share with compacts the use of a fixed lens and a small sensor. Like compacts, most use live preview to frame the image. Their autofocus uses the same contrast-detect mechanism, but many bridge cameras have a manual focus mode, in some cases using a separate focus ring, for greater control. They originally "bridged" the gap between affordable point-and-shoot cameras and the then-unaffordable early digital SLRs.

Due to the combination of big physical size but a small sensor, many of these cameras have very highly specified lenses with large zoom range and fast aperture, partially compensating for the inability to change lenses. On some, the lens qualifies as superzoom. To compensate for the lesser sensitivity of their small sensors, these cameras almost always include an image stabilization system to enable longer handheld exposures.

These cameras are sometimes marketed as, and confused with, digital SLR cameras because of their similar appearance. Bridge cameras lack the reflex viewing system of DSLRs, are usually fitted with fixed (non-interchangeable) lenses (although some have a lens thread to attach accessory wide-angle or telephoto converters), and can usually take movies with sound. The scene is composed by viewing either the liquid crystal display or the electronic viewfinder (EVF). Most have a longer shutter lag than a true DSLR, but they are capable of good image quality (with sufficient light) while being more compact and lighter than DSLRs. High-end models of this type have resolutions comparable to low- and mid-range DSLRs. Many of these cameras can store images in a raw image format, as processed and JPEG-compressed images, or both. The majority have a built-in flash similar to those found in DSLRs. Some of the earlier models from the 2000-2004 era in the 2 to 5 megapixel class, starting with Fujifilm's FinePix 2800, are excellent performers in both color rendition and sharpness, having been carefully made and originally sold at more than 20 times their current market value. Potential drawbacks to check for are damaged zoom and focusing mechanisms and unreliable or expensive storage media such as SmartMedia cards and Memory Sticks.

In bright sun, the quality difference between a good compact camera and a digital SLR is minimal, but bridge cameras are more portable, cost less, and have zoom ability similar to a DSLR. Thus a bridge camera may better suit outdoor daytime activities, except when seeking professional-quality photos.

In low-light conditions and/or at ISO equivalents above 800, most bridge cameras (or megazooms) fall short in image quality compared to even entry-level DSLRs. However, they do have one major, often unappreciated advantage: their much larger depth of field, due to the small sensor, allows larger apertures with shorter exposure times than a DSLR.
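The depth-of-field advantage follows from a standard rule of thumb: multiplying the f-number by the sensor's crop factor gives the full-frame aperture with roughly equivalent depth of field. A minimal sketch (the crop factor of 6 is illustrative, matching a typical small bridge-camera sensor):

```python
# Rule of thumb: depth of field at f/N on a cropped sensor is roughly
# the depth of field of f/(N * crop_factor) on a full-frame sensor.
def full_frame_equivalent_aperture(f_number, crop_factor):
    return f_number * crop_factor

# f/2.8 on a crop-factor-6 sensor behaves, for depth of field, like
# roughly f/16.8 on full frame: a far deeper zone of sharp focus.
print(full_frame_equivalent_aperture(2.8, 6))
```

This is why a bridge camera can shoot at a wide aperture (short exposure) while still keeping most of the scene in focus.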

Olympus announced the first bridge camera with a 3D photo mode. The Olympus SZ-30MR can take a 3D photo in any mode from macro to landscape: the photographer releases the shutter for the first shot, then slowly pans until the camera automatically takes a second image from a slightly different perspective. Because the 3D processing is built into the camera, the resulting .MPO file displays readily on 3D televisions and laptops.

A line-scan camera is a camera device containing a line-scan image sensor chip, and a focusing mechanism. These cameras are almost solely used in industrial settings to capture an image of a constant stream of moving material. Unlike video cameras, line-scan cameras use a single array of pixel sensors, instead of a matrix of them. Data coming from the line-scan camera has a frequency, where the camera scans a line, waits, and repeats. The data coming from the line-scan camera is commonly processed by a computer, to collect the one-dimensional line data and to create a two-dimensional image. The collected two-dimensional image data is then processed by image-processing methods for industrial purposes.
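The assembly of line-scan output into a 2-D image can be sketched as stacking successive 1-D scans as the material moves past the sensor. The data below is synthetic; real systems also synchronize scan rate to the material's speed:

```python
# Minimal sketch of line-scan image assembly: each scan is one row of
# pixels; rows accumulate over time into a 2-D image.

def assemble_image(line_scans):
    """Stack successive 1-D line scans into a 2-D image (list of rows)."""
    image = []
    for row in line_scans:
        image.append(list(row))
    return image

scans = [
    [0, 0, 255, 0],    # scan at t=0
    [0, 255, 255, 0],  # scan at t=1: a feature passes the sensor line
    [0, 0, 255, 0],    # scan at t=2
]
image = assemble_image(scans)
print(len(image), len(image[0]))  # 3 4 : three scans tall, four pixels wide
```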

Line-scan technology is capable of capturing data extremely fast and at very high image resolutions. Under such conditions, the collected image data can easily exceed 100 MB within a fraction of a second. Integrated systems based on line-scan cameras are therefore usually designed to streamline the camera's output in order to meet the system's objective using affordable computing hardware.

Line-scan cameras intended for the parcel-handling industry can integrate adaptive focusing mechanisms to scan six sides of any rectangular parcel in focus, regardless of angle and size. The resulting 2-D captured images can contain, among other things, 1D and 2D barcodes, address information, and any pattern that can be processed via image-processing methods. Since the images are 2-D, they are also human-readable and can be viewed on a computer screen. Advanced integrated systems include video coding, optical character recognition (OCR), and finish-line cameras for high-speed sports.

When digital cameras became common, many photographers asked whether their film cameras could be converted to digital. The answer was yes and no: for the majority of 35 mm film cameras it was no, because the reworking and cost would be too great, especially as lenses had been evolving along with cameras. For most cameras, a conversion to digital, providing enough space for the electronics and a liquid crystal display for preview, would require removing the back of the camera and replacing it with a custom-built digital unit.

The major reason affordable digital camera backs never became available was that the sensor manufacturers were identical to, or associated with, camera manufacturers that were interested in selling new equipment rather than extending the life of old equipment. In fact, the coming of digital cameras was very beneficial to the Japanese camera industry, which had shown signs of stagnation in the late 1980s due to market saturation. The new digital SLRs were for the most part purposely made not to be backward-compatible with the world's vast inventory of suddenly near-useless high-quality SLR lenses, even those with the same bayonet mount. This despite the fact that one major high-end manufacturer used to advertise its pre-digital optics as being "like money in the bank". As of 2011, no DSLR has appeared that takes the very common M42 lenses. Russian and Chinese manufacturers have not been able to make a DSLR of any sort; it remains to be seen whether they will, given the availability of the new 16 MP APS-C-size MT9H004 sensor from the US manufacturer Aptina.

Many early professional SLR cameras, such as the Kodak DCS series, were developed from 35 mm film cameras. The technology of the time, however, meant that rather than being digital "backs" the bodies of these cameras were mounted on large, bulky digital units, often bigger than the camera portion itself. These were factory built cameras, however, not aftermarket conversions.

A notable exception is the Nikon E2, followed by Nikon E3, using additional optics to convert the 35mm format to a 2/3 CCD-sensor.

A few 35 mm cameras have had digital camera backs made by their manufacturer, Leica being a notable example. Medium format and large format cameras (those using film stock greater than 35 mm), have a low unit production, and typical digital backs for them cost over $10,000. These cameras also tend to be highly modular, with handgrips, film backs, winders, and lenses available separately to fit various needs.

The very large sensors these backs use lead to enormous image sizes. For example, Phase One's P45 39 MP back creates a single TIFF image of up to 224.6 MB, and even greater pixel counts are available. Medium format digital cameras such as this are geared more towards studio and portrait photography than their smaller DSLR counterparts; the ISO speed in particular tends to have a maximum of 400, versus 6400 for some DSLRs (the Canon EOS-1D Mark IV and Nikon D3S offer ISO 12800, plus an expanded Hi-3 setting of ISO 102400).
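As a back-of-the-envelope check on the file size quoted above, an uncompressed 16-bit RGB TIFF from a sensor of roughly the P45's pixel dimensions lands in the same range. The exact dimensions and the 16-bit RGB storage format are assumptions here; real TIFF sizes vary with bit depth, compression, and metadata.

```python
# Rough estimate of an uncompressed TIFF size for a ~39 MP back.
# Assumes 16-bit RGB (3 channels x 2 bytes per pixel) and approximate
# P45 pixel dimensions; both are assumptions for illustration.

width, height = 7216, 5412          # assumed sensor dimensions (~39 MP)
bytes_per_pixel = 3 * 2             # RGB, 16 bits per channel

size_bytes = width * height * bytes_per_pixel
size_mib = size_bytes / (1024 ** 2)
print(round(size_mib, 1))  # on the order of the ~224 MB figure quoted above
```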

The resolution of a digital camera is often limited by the image sensor (typically a CCD or CMOS sensor chip) that turns light into discrete signals, replacing the job of film in traditional photography. The sensor is made up of millions of "buckets" that essentially count the number of photons that strike the sensor. This means that the brighter the image at a given point on the sensor, the larger the value that is read for that pixel. Depending on the physical structure of the sensor, a color filter array may be used which requires a demosaicing/interpolation algorithm. The number of resulting pixels in the image determines its "pixel count". For example, a 640x480 image would have 307,200 pixels, or approximately 307 kilopixels; a 3872x2592 image would have 10,036,224 pixels, or approximately 10 megapixels.
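The pixel-count arithmetic above is straightforward to verify:

```python
# Pixel count from image dimensions, as described above.

def pixel_count(width, height):
    """Total pixels in a width x height image."""
    return width * height

print(pixel_count(640, 480))    # 307,200 pixels (~307 kilopixels)
print(pixel_count(3872, 2592))  # 10,036,224 pixels (~10 megapixels)
```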

The pixel count alone is commonly presumed to indicate the resolution of a camera, but this simple figure of merit is a misconception. Other factors impact a sensor's resolution, including sensor size, lens quality, and the organization of the pixels (for example, a monochrome camera without a Bayer filter mosaic has a higher resolution than a typical color camera). Where such other factors limit the resolution, a greater pixel count does not improve it, but may instead make the digital images inconveniently large and/or exacerbate image noise. Many digital compact cameras are criticized for having too many pixels: their sensors can be so small that the 'buckets' easily overfill, and the sensor's resolution can exceed what the camera lens could possibly deliver.

As the technology has improved, costs have decreased dramatically. Counting the "pixels per dollar" as a basic measure of value for a digital camera, there has been a continuous and steady increase in the number of pixels each dollar buys in a new camera, in accord with the principles of Moore's Law. This predictability of camera prices was first presented in 1998 at the Australian PMA DIMA conference by Barry Hendy and since referred to as "Hendy's Law".
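Hendy's Law describes an exponential trend, which can be sketched as a simple doubling model. The starting value and doubling period below are illustrative assumptions chosen only to show the exponential form, not measured data.

```python
# Illustrative "pixels per dollar" trend in the spirit of Hendy's Law.
# start and doubling_years are assumed values for illustration only.

def pixels_per_dollar(years_elapsed, start=1000, doubling_years=2.0):
    """Exponential growth: the value doubles every `doubling_years` years."""
    return start * 2 ** (years_elapsed / doubling_years)

print(pixels_per_dollar(0))  # 1000.0
print(pixels_per_dollar(4))  # 4000.0 -- two doublings after four years
```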

Since only a few aspect ratios are commonly used (mainly 4:3 and 3:2), the number of sensor sizes that are useful is limited. Furthermore, sensor manufacturers do not produce every possible sensor size, but take incremental steps in sizes. For example, in 2007 the three largest sensors (in terms of pixel count) used by Canon were the 21.1, 17.9, and 16.6 megapixel CMOS sensors.

The Joint Photography Experts Group standard (JPEG) is the most common file format for storing image data. Other file types include Tagged Image File Format (TIFF) and various Raw image formats.

Many cameras, especially professional or DSLR cameras, support a Raw image format. A raw image is the unprocessed set of pixel data directly from the camera's sensor. They are often saved in formats proprietary to each manufacturer, such as NEF for Nikon, CRW or CR2 for Canon, and MRW for Minolta. Adobe Systems has released the DNG format, a royalty free raw image format which has been adopted by at least 10 camera manufacturers.

Raw files initially had to be processed in specialized image editing programs, but over time many mainstream editing programs, such as Google's Picasa, have added support for raw images. Editing raw format images allows more flexibility in settings such as white balance, exposure compensation, color temperature, and so on. In essence raw format allows the photographer to make major adjustments without losing image quality that would otherwise require retaking the picture.

Formats for movies are AVI, DV, MPEG, MOV (often containing motion JPEG), WMV, and ASF (basically the same as WMV). Recent formats include MP4, which is based on the QuickTime format and uses newer compression algorithms to allow longer recording times in the same space.

Other formats that are used in cameras but not for pictures are the Design Rule for Camera Format (DCF), an ISO specification for the camera's internal file structure and naming, and Digital Print Order Format (DPOF), which dictates what order images are to be printed in and how many copies.

Most cameras include Exif data that provides metadata about the picture. Exif data may include aperture, exposure time, focal length, date and time taken, and location.
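Exif metadata is carried in a JPEG file's APP1 segment. As a rough illustration, the stdlib-only sketch below walks the JPEG segment markers and reports whether an Exif APP1 segment is present. Actually decoding the fields (aperture, exposure time, and so on) would require a full Exif parser or a library; the marker walk here is deliberately simplified.

```python
# Minimal check for an Exif APP1 segment in a JPEG byte stream.
# This only locates the marker; it does not decode the Exif fields.

def has_exif(data: bytes) -> bool:
    if not data.startswith(b"\xff\xd8"):       # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        seg_len = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True                         # APP1 segment holding Exif
        i += 2 + seg_len                        # skip to the next segment
    return False

# A tiny synthetic JPEG header containing an Exif APP1 segment:
sample = b"\xff\xd8" + b"\xff\xe1" + (8).to_bytes(2, "big") + b"Exif\x00\x00"
print(has_exif(sample))  # True
```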

Recorder of deeds | Understanding and definition of the Recorder of Deeds | The regulations in installing Recorder of Deeds

A recorder of deeds is a government office tasked with maintaining public records and documents, especially records relating to real estate ownership and records that give persons other than the owner of a property real rights over that property.

Offices with similar duties (varying by jurisdiction) include registrar general, register of deeds, registrar of deeds, registrar of titles. The office of such an official may be referred to as the deeds registry or deeds office. In the United States, the recorder of deeds is often an elected county office and is called the county recorder. In some U.S. states, the functions of a recorder of deeds are a responsibility of the county clerk (or the county's clerk of court), and the official may be called a clerk-recorder or recorder-clerk.

The recorder of deeds provides a single location in which records of real rights are recorded and may be researched by interested parties. Documents regularly recorded by the recorder of deeds include deeds, mortgages, mechanic's liens, releases, and plats, among others. To allow full access to deeds recorded throughout the office's history, several indexes may be maintained, including grantor-grantee indexes, tract indexes, and plat maps. Storage methods for registry entries include paper, microform, and computer.
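As a sketch of how a grantor-grantee index works, each recorded instrument can be indexed under both party names so that either side of a transaction can be searched. All names and document numbers below are hypothetical.

```python
# Sketch of a grantor-grantee index: each recorded instrument is
# indexed under both the grantor (e.g. seller/mortgagor) and the
# grantee (e.g. buyer/lender), so either party can be looked up.
# All names and document numbers are hypothetical.

from collections import defaultdict

grantor_index = defaultdict(list)
grantee_index = defaultdict(list)

def record(doc_id, grantor, grantee, kind):
    entry = {"doc": doc_id, "grantor": grantor, "grantee": grantee, "kind": kind}
    grantor_index[grantor].append(entry)
    grantee_index[grantee].append(entry)

record("2011-0001", "Smith", "Jones", "deed")
record("2011-0002", "Jones", "First Bank", "mortgage")

# Researching the chain of title around Jones:
print([e["doc"] for e in grantee_index["Jones"]])  # where Jones acquired rights
print([e["doc"] for e in grantor_index["Jones"]])  # where Jones conveyed rights
```

Tracing a property's chain of title amounts to alternating between these two indexes: find the deed by which a party acquired the property in the grantee index, then look that party up in the grantor index to find the next conveyance.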

The principles of statutory, case, and common law are given effect by the recorder of deeds, insofar as it relates to vested ownership in land and other real rights. Because estate in land can be held in so many complex ways, a single deeds registry provides some stability, even though it cannot "guarantee" those real rights.

The legal certainty provided by a title deed issued under the registration of the recorder of deeds is of great significance to all parties who hold, or wish to acquire rights in real property. Certainty of title is the basis for the investment of massive amounts of money in real estate development for residential, commercial, industrial and agricultural use each year. This is why the meticulous recording of registration information by the recorder of deeds is so important.

Each document recorded against title to real estate can be examined and the portion of the bundle of rights that it includes can be determined. These records can assist interested parties in researching the history of land and the chain of title for any property and purpose.

The South African system of deeds registry is hailed by many as one of the best systems of title registration in the world. The South African Registrar of Deeds is responsible for the national system of deeds offices which, through a juristic foundation and long-standing practices and procedures, has the effect of “guaranteeing” title.

The Deeds Registries Act and Sectional Titles Act are applied to regulate the deeds registry system, and form the foundation of land registration in South Africa.

In the U.S., most Recorders of Deeds are elected officials serving the area of a county or county equivalent territory.

In some states, the recorder of deeds may also act as a public posting place for documents that are not directly related to estates in land, such as corporate charters, military discharges, Uniform Commercial Code records, applications for marriage licenses, and judgments.

Deeds in a few states of the U.S. are maintained under the Torrens title system or some limited implementation of it. (For example: Iowa, Minnesota, some property in Massachusetts, Colorado, Hawaii, New York, North Carolina, Ohio and Washington.) Other U.S. states, on the other hand, maintain their deeds under Common law, typically, in chronological order with a grantor/grantee index.

Shinkansen | History and definition of the Shinkansen | State of the first manufactures high-speed train

The Shinkansen, also known as the bullet train, is a network of high-speed railway lines in Japan operated by four Japan Railways Group companies. Starting with the Tōkaidō Shinkansen in 1964, the network has expanded to currently consist of 2,387.7 km (1,483.6 mi) of lines with maximum speeds of 240–300 km/h (149–186 mph), 283.5 km (176.2 mi) of Mini-shinkansen with a maximum speed of 130 km/h (81 mph) and 10.3 km (6.4 mi) of spur lines with Shinkansen services. The network presently links most major cities on the islands of Honshu and Kyushu, with construction of a link to the northern island of Hokkaido underway and plans to increase speeds on the Tōhoku Shinkansen up to 320 km/h (199 mph). Test runs have reached 443 km/h (275 mph) for conventional rail in 1996, and up to a world record 581 km/h (361 mph) for maglev trainsets in 2003.

The popular English name bullet train is a literal translation of the Japanese term dangan ressha, a nickname given to the project while it was initially being discussed in the 1930s. The name stuck because of the original 0 Series Shinkansen's resemblance to a bullet and its high speed.

The Shinkansen name was first formally used in 1940 for a proposed standard gauge passenger and freight line between Tokyo and Shimonoseki that would have used steam and electric locomotives with a top speed of 200 km/h (120 mph). Over the next three years, the Ministry of Railways drew up more ambitious plans to extend the line to Beijing (through a tunnel to Korea) and even Singapore, and build connections to the Trans-Siberian Railway and other trunk lines in Asia. These plans were abandoned in 1943 as Japan's position in World War II worsened. However, some construction did commence on the line; several tunnels on the present-day Shinkansen date to the war-era project.

Following the end of World War II, high-speed rail was forgotten for several years while traffic of passengers and freight steadily increased on the conventional Tōkaidō Main Line along with the reconstruction of Japanese industry and economy. By the mid-1950s the Tōkaidō Line was operating at full capacity, and the Ministry of Railways decided to revisit the Shinkansen project. In 1957, Odakyu Electric Railway introduced its 3000 series SE "Romancecar" train, setting a world speed record of 145 km/h (90 mph) for a narrow gauge train. This train gave designers the confidence that they could safely build an even faster standard gauge train. Thus the first Shinkansen, the 0 series, was built on the success of the Romancecar.

In the 1950s, it was widely believed that railways would soon be outdated and replaced by air travel and highways, as in America and many countries in Europe. However, Shinji Sogo, President of Japan National Railways, insisted strongly on the possibility of high-speed rail, and the Shinkansen project was implemented.

Government approval came in December 1958, and construction of the first segment of the Tōkaidō Shinkansen between Tokyo and Osaka started in April 1959. The cost of constructing the Shinkansen was at first estimated at nearly 200 billion yen, which was raised in the form of a government loan, railway bonds and a low-interest loan of US$80 million from the World Bank. Initial cost estimates, however, had been deliberately understated and the actual figures were nearly double at about 400 billion yen. As the budget shortfall became clear in 1963, Sogo resigned to take responsibility.

Shinkansen literally means new trunk line, referring to the tracks, but the name is widely used inside and outside Japan to refer to the trains as well as the system as a whole. The name Superexpress, initially used for Hikari trains, was retired in 1972 but is still used in English-language announcements and signage.

The Tōkaidō Shinkansen is the world's busiest high-speed rail line. Carrying 151 million passengers a year (March 2008), it has transported more passengers (over 4 billion, network over 6 billion) than any other high speed line in the world. Between Tokyo and Osaka, the two largest metropolises in Japan, up to thirteen trains per hour with sixteen cars each (1,323 seats capacity) run in each direction with a minimum headway of three minutes between trains. Though largely a long-distance transport system, the Shinkansen also serves commuters who travel to work in metropolitan areas from outlying cities.
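The figures above imply the line's hourly seat capacity per direction (simple arithmetic from the quoted numbers, not an official capacity figure):

```python
# Hourly seat capacity per direction on the Tokaido Shinkansen,
# from the figures quoted above: up to 13 trains per hour,
# each with sixteen cars and 1,323 seats.

trains_per_hour = 13
seats_per_train = 1323

seats_per_hour = trains_per_hour * seats_per_train
print(seats_per_hour)  # 17199 seats per hour per direction
```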

Japan was the first country to build dedicated railway lines for high speed travel. Because of the mountainous terrain, the existing network consisted of 3 ft 6 in (1,067 mm) narrow gauge lines, which generally took indirect routes and could not be adapted to higher speeds. Consequently, Japan had a greater need for new high speed lines than countries where the existing standard gauge or broad gauge rail system had more upgrade potential.

Among the key people credited with the construction of the first Shinkansen are Hideo Shima, the Chief Engineer, and Shinji Sogo, the first President of Japan National Railways (JNR), who managed to persuade politicians to back the plan. Other significant figures responsible for its technical development were Tadanao Miki, Tadashi Matsudaira, and Hajime Kawanabe, based at the Railway Technology Research Institute (RTRI), part of JNR. They were responsible for much of the technical development of the first line, the Tōkaidō Shinkansen. All three had worked on aircraft design during World War II.

To enable high-speed operation, the Shinkansen uses more advanced technologies than conventional rail, achieving not only high speed but also a high standard of safety and comfort. Its success has influenced other railways around the world, and the importance and advantages of high-speed rail have consequently been re-evaluated.

Shinkansen routes are completely separate from conventional rail lines (except for Mini-shinkansen services, which run through to conventional lines). Consequently, the Shinkansen is not affected by slower local or freight trains and has the capacity to operate many high-speed trains punctually. The lines have been built without road crossings at grade. Tracks are strictly off-limits, with penalties for trespassing regulated by law. The system uses tunnels and viaducts to go through and over obstacles rather than around them, with a minimum curve radius of 4,000 meters (2,500 meters on the oldest Tōkaidō Shinkansen).

The Shinkansen uses 1,435 mm standard gauge, in contrast to the 1,067 mm narrow gauge of older lines. Continuous welded rail and swingnose crossings are employed, eliminating gaps at turnouts and crossings. Long rails are used, joined by expansion joints to minimize gauge fluctuation due to thermal elongation and shrinkage.

A combination of ballasted and slab track is used, with slab track exclusively employed on concrete-bed sections such as viaducts and tunnels. Slab track is significantly more cost-effective in tunnel sections, since the lower track height reduces the cross-sectional area of the tunnel and thereby reduces construction costs by up to 30%.

The Shinkansen employs an ATC (Automatic Train Control) system, eliminating the need for trackside signals. It uses a comprehensive system of Automatic Train Protection. Centralized traffic control manages all train operations, and all tasks relating to train movement, track, station and schedule are networked and computerized.

The Shinkansen uses a 25,000 V AC overhead power supply (20,000 V AC on Mini-shinkansen lines) to overcome the limitations of the 1,500 V direct current used on the existing electrified narrow-gauge system. Power is distributed among the axles of the train to avoid the heavy axle loads that occur under single power cars.

Shinkansen trains are electric multiple unit style, offering high acceleration and deceleration, and reduced damage to the track because of lighter vehicles. The coaches are air-sealed to ensure stable air pressure when entering tunnels at high speed.

The Shinkansen is very reliable thanks to several factors, including its near-total separation from slower traffic. In 2003, JR Central reported that the Shinkansen's average arrival time was within six seconds of the scheduled time. This includes all natural and human accidents and errors and was calculated over roughly 160,000 Shinkansen trips completed. The previous record, from 1997, was 18 seconds.

High-speed rail in China | History and definitions of the fastest trains from China

High-speed rail in China refers to any commercial train service in the People's Republic of China with an average speed of 200 km/h (120 mph) or higher. China has the world's longest high-speed rail (HSR) network, with about 8,358 km (5,193 mi) of routes in service as of January 2011, including 2,197 km (1,365 mi) of rail lines with top speeds of 350 km/h (220 mph). China's high-speed rail network will be larger than all European high-speed rail networks combined by the end of 2011, and larger than the rest of the world's combined by the end of 2014. The high-speed trains have transported 600 million passengers since their introduction on April 18, 2007, with average daily ridership of 237,000 in 2007, 349,000 in 2008, 492,000 in 2009, and 796,000 in 2010.

China's high speed rail lines consist of upgraded conventional rail lines, newly-built high-speed passenger designated lines (PDLs), and the world’s first high-speed commercial magnetic levitation (maglev) line. The country is undergoing an HSR building boom. With generous funding from the Chinese government's economic stimulus program, 17,000 km (11,000 mi) of high-speed lines are now under construction. The entire HSR network will reach 13,073 km (8,123 mi) by the end of 2011 and 25,000 km (16,000 mi) by the end of 2015.

China is the first and only country to have commercial train service on conventional rail lines that can reach 350 km/h (217 mph). Notable examples of HSR lines include:
  1. The Wuhan–Guangzhou High-Speed Railway, a passenger-dedicated trunk line opened in 2009, that reduced the 968 km (601 mi) journey between the largest cities in central and southern China to 3 hours. Trains reach top speeds of 350 km/h (220 mph) and average 310 km/h (190 mph) for the entire trip.
  2. The Beijing-Tianjin Intercity Railway, an intercity express line opened in 2008, that shortened the 117 km (73 mi) commute between the two largest cities in northern China to 30 minutes. Trains reach top speeds of 330 km/h (210 mph) and average 234 km/h (145 mph).
  3. The Shanghai Maglev Train, an airport rail link service opened in 2004, that travels 30 km (18 mi) in 7 minutes, 20 seconds, averaging 240 km/h (150 mph) and reaching a top speed of 431 km/h (268 mph).
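The average speeds in the list above follow directly from distance over travel time; the Beijing-Tianjin figures, for example, are self-consistent:

```python
# Average speed as distance over time, checked against the
# Beijing-Tianjin figures quoted above: 117 km in 30 minutes.

def average_speed_kmh(distance_km, minutes):
    """Average speed in km/h for a trip of distance_km over the given minutes."""
    return distance_km / (minutes / 60)

print(average_speed_kmh(117, 30))  # 234.0 km/h, matching the quoted average
```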
China's initial high-speed trains were imported or built under technology-transfer agreements with foreign train-makers including Siemens, Bombardier, and Kawasaki Heavy Industries. Chinese engineers then re-designed and further improved the trains' internal components so that they could run at much higher speeds. China currently holds close to 1,000 local and international patents for high-speed rail technologies. Almost all Chinese high-speed trains are now made in China, and the latest and fastest model, the CRH380A, is fully designed and made in China.

The Beijing–Shanghai High-Speed Railway, set to open in June 2011, will use the new CRH380 trainsets, which can reach a top operational speed of 380 km/h (236 mph). However, after concerns over safety and opposition from passengers against the high ticket prices for the high-speed rail network, Chinese officials announced that some trains will be subject to a 300 km/h speed limit. Trains on the Beijing-Tianjin high-speed line and a few other inter-city lines will continue to run at a top speed of 350 km/h.

State planning for China's high-speed railway began in the early 1990s. The Ministry of Railways (MOR) submitted a proposal to build a high-speed railway between Beijing and Shanghai to the National People's Congress in December 1990. At the time, the existing Beijing-Shanghai railway was already reaching capacity, and the proposal was jointly studied by the Science & Technology Commission, State Planning Commission, State Economic & Trade Commission, and the MOR. In December 1994, the State Council commissioned a feasibility study for the line. Policy planners debated the necessity and economic viability of high-speed rail service. Supporters argued that high-speed rail would boost future economic growth. Opponents noted that high-speed rail systems in other countries were expensive and mostly unprofitable, and said that overcrowding on existing rail lines could be solved by expanding capacity through higher speed and frequency of service. In 1995, Premier Li Peng announced that preparatory work on the Beijing-Shanghai HSR would begin in the 9th Five-Year Plan (1996-2000), but construction was not scheduled until the first decade of the 21st century.

Despite setting speed records on test tracks, the DJJ2, DJF2, and other domestically produced high-speed trains were insufficiently reliable for commercial operation. The State Council turned to advanced technology from abroad, but made clear in directives that China's HSR expansion could not serve only to benefit foreign economies: it must also be used to develop China's own high-speed train building capacity through technology transfer. The State Council, the MOR, and the state-owned train builders China North Car (CNR) and China South Car (CSR) used China's large market and competition among foreign train-makers to induce technology transfers.

In 2003, the MOR was believed to favor Japan's Shinkansen technology, especially the 700 series, which was later exported to Taiwan. The Japanese government touted the 40-year track record of the Shinkansen and offered favorable financing. A Japanese report envisioned a winner-take-all scenario in which the winning technology provider would supply China's trains for over 8,000 km of high-speed rail. However, Chinese netizens angered by Japan's World War II atrocities organized a web campaign opposing the award of HSR contracts to Japanese companies. The protests gathered over a million signatures and politicized the issue. The MOR delayed the decision, broadened the bidding, and adopted a diversified approach to acquiring foreign high-speed train technology.

In June 2004, the MOR solicited bids to supply 200 high-speed train sets capable of running at 200 km/h. Alstom of France, Siemens of Germany, Bombardier Transportation (based in Germany), and a Japanese consortium led by Kawasaki all submitted bids. With the exception of Siemens, which refused to lower its demand of RMB ¥350 million per train set and €390 million for the technology transfer, all were awarded portions of the contract. Each had to adapt its train sets to China's own common standard and assemble units through local joint ventures (JVs) or cooperate with Chinese manufacturers. Bombardier, through its JV with CSR's Sifang Locomotive and Rolling Stock Co. (CSR Sifang), Bombardier Sifang (Qingdao) Transportation Ltd. (BST), won an order for 40 eight-car train sets based on Bombardier's Regina design. These trains, designated CRH1A, were delivered in 2006. Kawasaki won an order for 60 train sets based on its E2 Series Shinkansen for ¥9.3 billion. Of the 60 train sets, three were delivered directly from Nagoya, Japan, six were kits assembled at CSR Sifang, and the remaining 51 were made in China using transferred technology with domestic and imported parts; these are known as CRH2A. Alstom also won an order for 60 train sets based on the New Pendolino developed by Alstom Ferroviaria in Italy. The order had a similar delivery structure, with three shipped directly from Savigliano, six kits assembled by CNR's Changchun Railway Vehicles, and the rest locally made with transferred technology and some imported parts. Trains with Alstom technology carry the CRH5 designation.

The following year, Siemens reshuffled its bidding team, lowered its prices, joined the bidding for 300 km/h trains, and won a 60-train-set order. It supplied the technology for the CRH3C, based on the ICE3 (class 403) design, to CNR's Tangshan Railway Vehicle Co. Ltd. The transferred technology includes assembly, body, bogie, traction current conversion, traction transformers, traction motors, traction control, brake systems, and train control networks.

China's high-speed rail expansion is entirely managed, planned, and financed by the government. After committing to conventional-track high-speed rail in 2006, the state embarked on an ambitious campaign to build passenger-dedicated high-speed rail lines, which account for a large part of the government's growing budget for rail construction. Total investment in new rail lines grew from $14 billion in 2004 to $22.7 billion in 2006 and $26.2 billion in 2007. In response to the global economic recession, the government accelerated the pace of HSR expansion to stimulate economic growth. Total investment in new rail lines, including HSR, reached $49.4 billion in 2008 and $88 billion in 2009. In all, the state plans to spend $300 billion to build a 25,000 km (16,000 mi) HSR network by 2020.

China's high-speed rail construction projects are highly capital intensive. They are primarily funded by state owned banks and financial institutions, which lend money to the MOR and local governments. The MOR, through its financing arm, the China Rail Investment Corp, issued an estimated ¥1 trillion (US$150 billion in 2010 dollars) in debt to finance HSR construction from 2006 to 2010, including ¥310 billion in the first 10 months of 2010. CRIC has also raised some capital through equity offerings; in the spring of 2010, CRIC sold a 4.5 percent stake in the Beijing-Shanghai High Speed Railway to the Bank of China for ¥6.6 billion and a 4.537 percent stake to the public for ¥6 billion. CRIC retained 56.2 percent ownership on that line. As of 2010, the CRIC-bonds are considered to be relatively safe investments because they are backed by assets (the railways) and implicitly by the government.
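The equity figures above imply a rough overall valuation for the Beijing-Shanghai line. This is simple arithmetic on the quoted stake and price, not an official figure:

```python
# Rough implied valuation of the Beijing-Shanghai HSR line from the
# equity sale quoted above: a 4.5 percent stake sold for RMB 6.6 billion.
# Illustrative arithmetic only, not an official valuation.

stake_fraction = 0.045
price_billion_rmb = 6.6

implied_valuation = price_billion_rmb / stake_fraction
print(round(implied_valuation, 1))  # roughly 146.7 billion RMB
```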

China Railway High-speed runs several types of electric multiple units (trainsets), whose designs were all originally imported from other nations and given the designations CRH-1 through CRH-5. CRH trainsets are intended to provide fast and convenient travel between cities. Some of the trainsets are manufactured locally through technology transfer, a key requirement for China. The signalling, track and support structures, control software, and station design are developed domestically, with foreign elements as well, so the system as a whole can be called Chinese. China currently holds many new patents related to the internal components of these train sets, having re-designed major components so the trains can run at much higher speeds than the original foreign designs.

The CRH1A/B/E, CRH2A/B/E, and CRH5A are designed for a maximum operating speed of 200 km/h and can reach up to 250 km/h. The CRH3C and CRH2C designs have a maximum operating speed of 300 km/h and can reach up to 350 km/h, with top testing speeds of more than 380 km/h. In practical terms, however, issues such as maintenance cost, comfort, and safety make design speeds above 380 km/h impractical and remain limiting factors.

Chinese train-makers and rail builders have signed agreements to build HSRs in Turkey, Venezuela, and Argentina, and are bidding on HSR projects in the United States, Russia, Saudi Arabia, Brazil (São Paulo to Rio de Janeiro), Myanmar, and other countries. They are competing directly with the established European and Japanese manufacturers, and sometimes partnering with them. In Saudi Arabia's Haramain High Speed Rail Project, Alstom partnered with China Railway Construction Corp. to win the contract to build phase I of the Mecca to Medina HSR line, and Siemens has joined CSR to bid on phase II. China is also competing with Japan, Germany, South Korea, Spain, France, and Italy in bidding for California's high-speed rail line project, which would connect San Francisco and Los Angeles. In November 2009, the MOR signed preliminary agreements with the state's high-speed rail authority and General Electric (GE) under which China would license technology, provide financing, and furnish up to 20 percent of the parts, with the remainder sourced from American suppliers and final assembly of the rolling stock taking place in the United States.

General Electric | History and definition of General Electric | Symbol of General Electric

General Electric
General Electric Company (NYSE: GE), or GE, is an American multinational conglomerate corporation incorporated in Schenectady, New York and headquartered in Fairfield, Connecticut, United States. The company operates through five segments: Energy, Technology Infrastructure, NBC Universal, Capital Finance and Consumer & Industrial. In 2011, Forbes ranked GE as the world's third largest company after JPMorgan Chase and HSBC, based on a formula that compared the total sales, profits, assets and market value of several multinational companies. The company has 287,000 employees around the world.

By 1890, Thomas Edison had brought together several of his business interests under one corporation to form Edison General Electric. At about the same time, Thomson-Houston Electric Company, under the leadership of Charles Coffin, gained access to a number of key patents through the acquisition of a number of competitors. Subsequently, General Electric was formed by the 1892 merger of Edison General Electric of Schenectady, New York and Thomson-Houston Electric Company of Lynn, Massachusetts and both plants remain in operation under the GE banner to this day. The company was incorporated in New York, with the Schenectady plant as headquarters for many years thereafter. Around the same time, General Electric's Canadian counterpart, Canadian General Electric, was formed.

In 1896, General Electric was one of the original 12 companies listed on the newly formed Dow Jones Industrial Average and, 115 years later, is the only one of them still on the index (though its listing has not been continuous).

In 1911 the National Electric Lamp Association (NELA) was absorbed into General Electric's existing lighting business. GE then established its lighting division headquarters at Nela Park in East Cleveland, Ohio. Nela Park is still the headquarters for GE's lighting business. In 1935, GE was one of the top 30 companies traded at the London Stock Exchange.

GE's long history of working with turbines in the power-generation field gave it the engineering know-how to move into the new field of aircraft turbosuperchargers. Led by Sanford Moss, GE introduced the first turbosuperchargers during World War I and continued to develop them during the interwar period. They became indispensable in the years immediately before World War II, and GE was the world leader in exhaust-driven supercharging when the war started. This experience, in turn, made GE a natural choice to develop the Whittle W.1 jet engine that was demonstrated in the United States in 1941. Although its early work with Whittle's designs was later handed to the Allison Engine Company, GE Aviation emerged as one of the world's largest engine manufacturers, second only to the older British firm Rolls-Royce plc, a leader in innovative, reliable, high-performance jet engine design and manufacture.

In 2002 GE acquired the wind power assets of Enron during its bankruptcy proceedings. Enron Wind was the only surviving U.S. manufacturer of large wind turbines at the time, and GE increased engineering and supply resources for the wind division, doubling annual sales to $1.2 billion in 2003. It acquired ScanWind in 2009.

Some consumers boycotted GE light bulbs, refrigerators and other products in the 1980s and 1990s to protest GE’s role in nuclear weapons production.

GE was one of the eight major computer companies throughout the 1960s: IBM, the largest, was nicknamed "Snow White", followed by the "Seven Dwarfs": Burroughs, NCR, Control Data Corporation, Honeywell, RCA, UNIVAC, and GE.

GE had an extensive line of general-purpose and special-purpose computers, among them the GE 200, GE 400, and GE 600 series general-purpose computers; the GE 4010, GE 4020, and GE 4060 real-time process control computers; and the Datanet 30 and Datanet 355 message-switching computers (the Datanet 30 and 355 were also used as front-end processors for GE mainframe computers). A Datanet 500 computer was designed but never sold.

GE is a multinational conglomerate headquartered in Fairfield, Connecticut. Its New York main offices are located at 30 Rockefeller Plaza in Rockefeller Center, known as the GE Building for the prominent GE logo on the roof. NBC's headquarters and main studios are also located in the building. Through its RCA subsidiary, it has been associated with the Center since its construction in the 1930s.

The company describes itself as composed of a number of primary business units or "businesses." Each unit is itself a vast enterprise, many of which would, even as a standalone company, rank in the Fortune 500. The list of GE businesses varies over time as the result of acquisitions, divestitures and reorganizations. GE's tax return is the largest return filed in the United States; the 2005 return was approximately 24,000 pages when printed out, and 237 megabytes when submitted electronically. The company also "spends more on U.S. lobbying than any other company."

In 2005 GE launched its "Ecomagination" initiative in an attempt to position itself as a "green" company. GE is currently one of the biggest players in the wind power industry and is also developing environmentally friendly products such as hybrid locomotives, desalination and water-reuse solutions, and photovoltaic cells. The company "plans to build the largest solar-panel-making factory in the U.S." and has set goals for its subsidiaries to lower their greenhouse gas emissions.

On May 21, 2007, GE announced it would sell its GE Plastics division to petrochemicals manufacturer SABIC for net proceeds of $11.6 billion. The transaction took place on August 31, 2007, and the company name changed to SABIC Innovative Plastics, with Brian Gladden as CEO.

Jeffrey Immelt is the current chairman of the board and chief executive officer of GE. He was selected by GE's Board of Directors in 2000 to replace John Francis Welch Jr. (Jack Welch) following his retirement. Previously, Immelt had headed GE's Medical Systems division (now GE Healthcare) as its President and CEO. He has been with GE since 1982 and is on the board of two non-profit organizations.

His tenure as Chairman and CEO began at a time of crisis: he took over the role on September 7, 2001, four days before the terrorist attacks on the United States, which killed two employees, cost GE's insurance business $600 million, and directly affected the company's Aircraft Engines sector. Immelt has also been selected as one of President Obama's financial advisors on the economic rescue plan.

After taking the reins as chairman, CEO Jeffrey Immelt commissioned a set of changes to the presentation of the brand in 2004 to unify GE's diversified businesses. The changes included a new corporate color palette, small modifications to the GE logo, a new custom font (GE Inspira), and a new slogan, "imagination at work", composed by David Lucas, replacing the longtime slogan "we bring good things to life". The standard requires many headlines to be lowercased and adds visual "white space" to documents and advertising to promote an open and approachable image. The changes were designed by Wolff Olins and are used extensively in GE's marketing, literature, and website.

Through these businesses, GE participates in a wide variety of markets including the generation, transmission and distribution of electricity (e.g. nuclear, gas and solar), lighting, industrial automation, medical imaging equipment, motors, railway locomotives, aircraft jet engines, and aviation services. It co-owns NBC Universal with Comcast. Through GE Commercial Finance, GE Consumer Finance, GE Equipment Services, and GE Insurance it offers a range of financial services as well. It has a presence in over 100 countries.

Since over half of GE's revenue is derived from financial services, it is arguably a financial company with a manufacturing arm. It is also one of the largest lenders in countries other than the United States, such as Japan. Even though the first wave of conglomerates (such as ITT Corporation, Ling-Temco-Vought, Tenneco, etc.) fell by the wayside by the mid-1980s, in the late 1990s, another wave (consisting of Westinghouse, Tyco, and others) tried and failed to emulate GE's success.

During the 2011 Fukushima I Nuclear Power Plant catastrophe it became public that the six reactors in the plant had been designed by General Electric and that critics had opposed GE's design as far back as 1972.

In March 2011, The New York Times reported that despite earning $14.2 billion in worldwide profits, including more than $5 billion from U.S. operations, General Electric owed no U.S. taxes in 2010 and instead claimed a tax benefit of $3.2 billion. The same article pointed out that despite its continually diminishing tax liability since the 1990s, GE has laid off one-fifth of its American workers since 2002.

GE was the focus of a 1991 short subject Academy Award-winning documentary, Deadly Deception: General Electric, Nuclear Weapons, and Our Environment, that juxtaposed GE's rosy "We Bring Good Things To Life" commercials with the true stories of workers and neighbors whose lives have been affected by the company's activities involving nuclear weapons.

GE was satirized on March 14, 1998, in the TV Funhouse segment of Saturday Night Live entitled "Conspiracy Theory Rock". The segment aired only once and was subsequently pulled by NBC.

GE's corporate culture and management practices are frequently lampooned in the NBC television series 30 Rock. In the first season episode "The Rural Juror", character Jack Donaghy opens a complex organization chart that depicts the ownership structure of General Electric's subsidiaries. The chart reveals that NBC is a subsidiary of Sheinhardt Wig Company, and NBC in turn owns subsidiaries not related to broadcasting or entertainment production.

Tokyo Electric Power Company | Largest electricity company in the world | History and definition of Electric Power Company | Symbol of TEPCO

Tepco
The Tokyo Electric Power Company, Incorporated, also known as Toden or TEPCO, is an electric utility serving Japan's Kantō region, Yamanashi Prefecture, and the eastern portion of Shizuoka Prefecture. This area includes Tokyo. Its headquarters are located in Uchisaiwaicho, Chiyoda, Tokyo, and international branch offices exist in Washington, D.C., and London.

TEPCO is the fourth largest electric power company in the world (after E.ON, Électricité de France, and RWE) and the largest in Asia. The amount of electricity it sells annually equals what Italy uses in a year. TEPCO holds one-third of the Japanese electric market and is the largest of Japan's ten electric utilities.

In 2007 TEPCO was forced to shut down the Kashiwazaki-Kariwa Nuclear Power Plant after the Niigata-Chuetsu-Oki earthquake. That year it posted its first loss in 28 years, and corporate losses continued until the plant reopened in 2009. Following the March 2011 Tōhoku earthquake and tsunami, its Fukushima Daiichi power plant became the site of a continuing nuclear disaster, one of the world's most serious. TEPCO could face ¥2 trillion ($23.6 billion) in special losses in the business year to March 2012, and Japan plans to put TEPCO under effective state control to guarantee compensation payments to people affected by radiation.

Japan's ten regional electric companies, including TEPCO, were established in 1951 with the end of the state-run electric industry regime for national wartime mobilization.

In the 1950s, the company's primary goal was to facilitate a rapid recovery from the infrastructure devastation of World War II. After the recovery period, the company had to expand its supply capacity to catch up with the country's rapid economic growth by developing fossil fuel power plants and a more efficient transmission network.

In the 1960s and 1970s, the company faced the challenges of increased environmental pollution and oil shocks. TEPCO began addressing environmental concerns through expansion of its LNG fueled power plant network as well as greater reliance on nuclear generation. The first nuclear unit at the Fukushima Dai-ichi (Fukushima I) nuclear power plant began operational generation on March 26, 1970.

During the 1980s and 1990s, the widespread use of air conditioners and IT/OA appliances resulted in a growing gap between daytime and nighttime electricity demand. To reduce surplus generation capacity and increase capacity utilization, TEPCO developed pumped-storage hydroelectric power plants and promoted thermal storage units.

More recently, TEPCO has been expected to play a key role in achieving Japan's targets for reduced carbon dioxide emissions under the Kyoto Protocol. It also faces difficulties related to the trend toward deregulation in Japan's electric industry as well as low growth in power demand. In light of these circumstances, TEPCO launched an extensive sales promotion campaign called "Switch!", promoting all-electric housing both to make more efficient use of its generation capacity and to erode the market share of gas companies.

The company's power generation consists of two main networks. Fossil fuel power plants around Tokyo Bay are used for peak load supply and nuclear reactors in Fukushima and Niigata Prefecture provide base load supply. Additionally, hydroelectric plants in the mountainous areas outside the Kanto Plain, despite their relatively small capacity compared to fossil fuel and nuclear generation, remain important in providing peak load supply. The company also purchases electricity from other regional or wholesale electric power companies like Tohoku Electric Power Co., J-POWER, and Japan Atomic Power Company.

The company has built a grid of radial and loop configurations connecting power plants with urban and industrial demand areas. Each transmission line carries electricity at high voltage (66–500 kV) between power plants and substations. Transmission lines are normally strung between towers, but within the Tokyo metropolitan area high-voltage lines run underground.

From substations, electricity is distributed at lower voltages (22–6 kV). For high-voltage supply to large buildings and factories, distribution lines are connected directly to customers' electricity systems; in this case, customers must purchase and install transformers and other equipment to run their electric appliances. For low-voltage supply to houses and small shops, distribution lines first pass through the company's transformers (seen on utility poles and in utility boxes), where the voltage is stepped down to 100/200 V, and are then connected to end users.

Under normal conditions, TEPCO's transmission and distribution infrastructure is one of the most reliable electricity networks in the world: blackout frequency and average recovery time compare favorably with those of other electric companies in Japan and in other developed countries. The company instituted its first-ever rolling blackouts following the shutdown of the Fukushima I and II plants, which were close to the epicenter of the March 2011 earthquake. For example, on the morning of Tuesday, March 15, 2011, 700,000 households had no power for three hours. On March 14, 2011, the company had to cope with a 10 million kW gap between demand and production.

On August 29, 2002, the government of Japan revealed that TEPCO was guilty of false reporting in routine governmental inspections of its nuclear plants and of systematic concealment of plant safety incidents. All seventeen of its boiling-water reactors were shut down for inspection as a result. TEPCO's chairman Hiroshi Araki, President Nobuya Minami, Vice-President Toshiaki Enomoto, and the advisers Shō Nasu and Gaishi Hiraiwa stepped down by September 30, 2002. The utility "eventually admitted to two hundred occasions over more than two decades between 1977 and 2002, involving the submission of false technical data to authorities". Upon taking over leadership responsibilities, TEPCO's new president issued a public commitment that the company would take all the countermeasures necessary to prevent fraud and restore the nation's confidence. By the end of 2005, generation at suspended plants had been restarted with government approval.

In 2007, however, the company announced to the public that an internal investigation had revealed a large number of unreported incidents. These included an unexpected unit criticality in 1978 and additional systematic false reporting, which had not been uncovered during the 2002 inquiry. Along with scandals at other Japanese electric companies, this failure to ensure corporate compliance resulted in strong public criticism of Japan's electric power industry and the nation's nuclear energy policy. Again, the company made no effort to identify those responsible.

On 11 March 2011 several nuclear reactors in Japan were badly damaged by the 2011 Tōhoku earthquake and tsunami.

The Tōkai Nuclear Power Plant lost external electric power and experienced the failure of one of its two cooling pumps and of two of its three emergency power generators. External power could be restored only two days after the earthquake.

The Japanese government declared an “atomic power emergency” and evacuated thousands of residents living close to TEPCO's Fukushima I plant. Reactors 4, 5 and 6 had been shut down prior to the earthquake for planned maintenance. The remaining reactors were shut down automatically after the earthquake, but the subsequent tsunami flooded the plant, knocking out emergency generators needed to run pumps which cool and control the reactors. The flooding and earthquake damage prevented assistance being brought from elsewhere. Over the following days there was evidence of partial nuclear meltdowns in reactors 1, 2 and 3; hydrogen explosions destroyed the upper cladding of the building housing reactors 1 and 3; an explosion damaged reactor 2's containment; and severe fires broke out at reactor 4.

The Japanese authorities rated the events at reactors 1, 2 and 3 as level 5 (Accident With Wider Consequences) on the International Nuclear Event Scale, while the events at reactor 4 were placed at level 3 (Serious Incident). The situation as a whole was rated level 7 (Major Accident). On 20 March, Japan's chief cabinet secretary Yukio Edano "confirmed for the first time that the nuclear complex — with heavy damage to reactors and buildings and with radioactive contamination throughout — would be closed once the crisis was over." At the same time, questions were being asked, in retrospect, about whether company management waited too long before pumping seawater into the plant, a measure that would ruin, and has now ruined, the reactors; and, looking forward, "whether time is working for or against the workers and soldiers struggling to re-establish cooling at the crippled plant." One report noted that the defense minister, Toshimi Kitazawa, on 21 March had committed "military firefighters to spray water around the clock on an overheated storage pool at Reactor No. 3." The report concluded with "a senior nuclear executive who insisted on anonymity but has many contacts in Japan sa[ying that] ... caution ... [as] plant operators have been struggling to reduce workers' risk ... had increased the risk of a serious accident. He suggested that Japan's military assume primary responsibility. 'It's the same trade-off you have to make in war, and that is the sacrifice of a few for the safety of many,' he said. 'But a corporation just cannot do that.'"

There has been considerable criticism of the way TEPCO handled the crisis. It was reported that seawater was used only after Prime Minister Naoto Kan ordered it following an explosion at one reactor on the evening of 12 March, though executives had started considering it that morning. TEPCO did not begin using seawater at the other reactors until 13 March. Referring to that same early decision-making sequence, "Michael Friedlander, a former senior operator at a Pennsylvania power plant with General Electric reactors similar to the troubled ones in Japan, said the crucial question is whether Japanese officials followed G.E.'s emergency operating procedures." Kuni Yogo, a former atomic energy policy planner in Japan's Science and Technology Agency, and Akira Omoto, a former TEPCO executive and a member of the Japanese Atomic Energy Commission, both questioned the decisions of TEPCO's management during the crisis. Kazuma Yokota, a safety inspector with Japan's Nuclear and Industrial Safety Agency (NISA), was at Fukushima I at the time of the earthquake and tsunami and provided details of the early progression of the crisis.